00:00:00.000 Started by upstream project "autotest-per-patch" build number 126238 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.092 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.093 The recommended git tool is: git 00:00:00.093 using credential 00000000-0000-0000-0000-000000000002 00:00:00.094 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.138 Fetching changes from the remote Git repository 00:00:00.142 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.188 Using shallow fetch with depth 1 00:00:00.188 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.188 > git --version # timeout=10 00:00:00.220 > git --version # 'git version 2.39.2' 00:00:00.220 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.241 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.241 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.935 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.948 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.961 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:06.961 > git config core.sparsecheckout # timeout=10 00:00:06.971 > git read-tree -mu HEAD # timeout=10 00:00:06.989 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:07.010 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:07.010 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:07.099 [Pipeline] Start of Pipeline 00:00:07.111 [Pipeline] library 00:00:07.112 Loading library shm_lib@master 00:00:07.112 Library shm_lib@master is cached. Copying from home. 00:00:07.125 [Pipeline] node 00:00:07.132 Running on VM-host-SM4 in /var/jenkins/workspace/nvme-vg-autotest 00:00:07.133 [Pipeline] { 00:00:07.140 [Pipeline] catchError 00:00:07.141 [Pipeline] { 00:00:07.149 [Pipeline] wrap 00:00:07.156 [Pipeline] { 00:00:07.162 [Pipeline] stage 00:00:07.163 [Pipeline] { (Prologue) 00:00:07.177 [Pipeline] echo 00:00:07.178 Node: VM-host-SM4 00:00:07.182 [Pipeline] cleanWs 00:00:07.189 [WS-CLEANUP] Deleting project workspace... 00:00:07.189 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.193 [WS-CLEANUP] done 00:00:07.420 [Pipeline] setCustomBuildProperty 00:00:07.503 [Pipeline] httpRequest 00:00:07.534 [Pipeline] echo 00:00:07.535 Sorcerer 10.211.164.101 is alive 00:00:07.541 [Pipeline] httpRequest 00:00:07.544 HttpMethod: GET 00:00:07.544 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.545 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.567 Response Code: HTTP/1.1 200 OK 00:00:07.567 Success: Status code 200 is in the accepted range: 200,404 00:00:07.568 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:29.009 [Pipeline] sh 00:00:29.293 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:29.306 [Pipeline] httpRequest 00:00:29.325 [Pipeline] echo 00:00:29.327 Sorcerer 10.211.164.101 is alive 00:00:29.333 [Pipeline] httpRequest 00:00:29.336 HttpMethod: GET 00:00:29.337 URL: http://10.211.164.101/packages/spdk_996bd8752099a6dcd6e8785d9f9d0e1e2210ec8a.tar.gz 00:00:29.337 Sending request to url: http://10.211.164.101/packages/spdk_996bd8752099a6dcd6e8785d9f9d0e1e2210ec8a.tar.gz 00:00:29.338 Response Code: HTTP/1.1 200 OK 00:00:29.338 Success: Status code 200 is in the accepted range: 200,404 00:00:29.339 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_996bd8752099a6dcd6e8785d9f9d0e1e2210ec8a.tar.gz 00:00:45.958 [Pipeline] sh 00:00:46.244 + tar --no-same-owner -xf spdk_996bd8752099a6dcd6e8785d9f9d0e1e2210ec8a.tar.gz 00:00:49.541 [Pipeline] sh 00:00:49.820 + git -C spdk log --oneline -n5 00:00:49.820 996bd8752 blob: Fix spdk_bs_blob_decouple_parent when blob's ancestor is an esnap. 00:00:49.820 a95bbf233 blob: set parent_id properly on spdk_bs_blob_set_external_parent. 
00:00:49.820 248c547d0 nvmf/tcp: add option for selecting a sock impl 00:00:49.820 2d30d9f83 accel: introduce tasks in sequence limit 00:00:49.820 2728651ee accel: adjust task per ch define name 00:00:49.847 [Pipeline] writeFile 00:00:49.867 [Pipeline] sh 00:00:50.147 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:50.163 [Pipeline] sh 00:00:50.479 + cat autorun-spdk.conf 00:00:50.479 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.479 SPDK_TEST_NVME=1 00:00:50.479 SPDK_TEST_FTL=1 00:00:50.479 SPDK_TEST_ISAL=1 00:00:50.479 SPDK_RUN_ASAN=1 00:00:50.479 SPDK_RUN_UBSAN=1 00:00:50.479 SPDK_TEST_XNVME=1 00:00:50.479 SPDK_TEST_NVME_FDP=1 00:00:50.480 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:50.486 RUN_NIGHTLY=0 00:00:50.489 [Pipeline] } 00:00:50.509 [Pipeline] // stage 00:00:50.530 [Pipeline] stage 00:00:50.532 [Pipeline] { (Run VM) 00:00:50.546 [Pipeline] sh 00:00:50.825 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:50.825 + echo 'Start stage prepare_nvme.sh' 00:00:50.825 Start stage prepare_nvme.sh 00:00:50.825 + [[ -n 3 ]] 00:00:50.825 + disk_prefix=ex3 00:00:50.825 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:00:50.825 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:00:50.825 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:00:50.825 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.825 ++ SPDK_TEST_NVME=1 00:00:50.825 ++ SPDK_TEST_FTL=1 00:00:50.825 ++ SPDK_TEST_ISAL=1 00:00:50.825 ++ SPDK_RUN_ASAN=1 00:00:50.825 ++ SPDK_RUN_UBSAN=1 00:00:50.825 ++ SPDK_TEST_XNVME=1 00:00:50.825 ++ SPDK_TEST_NVME_FDP=1 00:00:50.825 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:50.825 ++ RUN_NIGHTLY=0 00:00:50.825 + cd /var/jenkins/workspace/nvme-vg-autotest 00:00:50.825 + nvme_files=() 00:00:50.825 + declare -A nvme_files 00:00:50.825 + backend_dir=/var/lib/libvirt/images/backends 00:00:50.825 + nvme_files['nvme.img']=5G 00:00:50.825 + nvme_files['nvme-cmb.img']=5G 00:00:50.825 + nvme_files['nvme-multi0.img']=4G 00:00:50.825 + nvme_files['nvme-multi1.img']=4G 00:00:50.825 + nvme_files['nvme-multi2.img']=4G 00:00:50.825 + nvme_files['nvme-openstack.img']=8G 00:00:50.825 + nvme_files['nvme-zns.img']=5G 00:00:50.825 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:50.825 + (( SPDK_TEST_FTL == 1 )) 00:00:50.825 + nvme_files["nvme-ftl.img"]=6G 00:00:50.825 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:50.825 + nvme_files["nvme-fdp.img"]=1G 00:00:50.825 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:50.825 + for nvme in "${!nvme_files[@]}" 00:00:50.825 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:00:50.825 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:50.825 + for nvme in "${!nvme_files[@]}" 00:00:50.825 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-ftl.img -s 6G 00:00:51.162 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:51.162 + for nvme in "${!nvme_files[@]}" 00:00:51.162 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:00:51.162 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:51.162 + for nvme in "${!nvme_files[@]}" 00:00:51.162 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:00:51.162 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:51.162 + for nvme in "${!nvme_files[@]}" 00:00:51.162 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:00:51.445 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:51.445 + for nvme in "${!nvme_files[@]}" 00:00:51.445 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:00:51.445 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:51.445 + for nvme in "${!nvme_files[@]}" 00:00:51.445 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:00:51.445 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:51.445 + for nvme in "${!nvme_files[@]}" 00:00:51.445 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-fdp.img -s 1G 00:00:51.704 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:51.704 + for nvme in "${!nvme_files[@]}" 00:00:51.704 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:00:52.635 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:52.635 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:00:52.635 + echo 'End stage prepare_nvme.sh' 00:00:52.635 End stage prepare_nvme.sh 00:00:52.644 [Pipeline] sh 00:00:52.923 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:52.923 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex3-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:00:52.923 00:00:52.923 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:00:52.923 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:00:52.923 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:00:52.923 HELP=0 00:00:52.923 DRY_RUN=0 00:00:52.923 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,/var/lib/libvirt/images/backends/ex3-nvme-fdp.img, 00:00:52.923 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:52.923 NVME_AUTO_CREATE=0 00:00:52.923 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,, 00:00:52.923 NVME_CMB=,,,, 00:00:52.923 NVME_PMR=,,,, 00:00:52.923 NVME_ZNS=,,,, 00:00:52.923 NVME_MS=true,,,, 00:00:52.923 NVME_FDP=,,,on, 00:00:52.923 SPDK_VAGRANT_DISTRO=fedora38 00:00:52.923 SPDK_VAGRANT_VMCPU=10 00:00:52.923 SPDK_VAGRANT_VMRAM=12288 00:00:52.923 SPDK_VAGRANT_PROVIDER=libvirt 00:00:52.923 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:52.923 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:52.923 SPDK_OPENSTACK_NETWORK=0 00:00:52.923 VAGRANT_PACKAGE_BOX=0 00:00:52.923 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:52.923 FORCE_DISTRO=true 00:00:52.923 VAGRANT_BOX_VERSION= 00:00:52.923 EXTRA_VAGRANTFILES= 00:00:52.923 NIC_MODEL=e1000 00:00:52.923 00:00:52.923 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt' 00:00:52.923 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:00:56.203 Bringing machine 'default' up with 'libvirt' provider... 00:00:57.185 ==> default: Creating image (snapshot of base box volume). 00:00:57.443 ==> default: Creating domain with the following settings... 
00:00:57.443 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721071367_b6722c563802cc158240 00:00:57.443 ==> default: -- Domain type: kvm 00:00:57.443 ==> default: -- Cpus: 10 00:00:57.443 ==> default: -- Feature: acpi 00:00:57.443 ==> default: -- Feature: apic 00:00:57.443 ==> default: -- Feature: pae 00:00:57.443 ==> default: -- Memory: 12288M 00:00:57.443 ==> default: -- Memory Backing: hugepages: 00:00:57.443 ==> default: -- Management MAC: 00:00:57.443 ==> default: -- Loader: 00:00:57.443 ==> default: -- Nvram: 00:00:57.443 ==> default: -- Base box: spdk/fedora38 00:00:57.443 ==> default: -- Storage pool: default 00:00:57.443 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721071367_b6722c563802cc158240.img (20G) 00:00:57.443 ==> default: -- Volume Cache: default 00:00:57.443 ==> default: -- Kernel: 00:00:57.443 ==> default: -- Initrd: 00:00:57.443 ==> default: -- Graphics Type: vnc 00:00:57.443 ==> default: -- Graphics Port: -1 00:00:57.443 ==> default: -- Graphics IP: 127.0.0.1 00:00:57.443 ==> default: -- Graphics Password: Not defined 00:00:57.443 ==> default: -- Video Type: cirrus 00:00:57.443 ==> default: -- Video VRAM: 9216 00:00:57.443 ==> default: -- Sound Type: 00:00:57.443 ==> default: -- Keymap: en-us 00:00:57.443 ==> default: -- TPM Path: 00:00:57.443 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:57.443 ==> default: -- Command line args: 00:00:57.443 ==> default: -> value=-device, 00:00:57.443 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:57.443 ==> default: -> value=-drive, 00:00:57.443 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:00:57.443 ==> default: -> value=-device, 00:00:57.443 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:00:57.443 ==> default: -> value=-device, 00:00:57.443 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:57.443 ==> default: -> value=-drive, 00:00:57.443 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-1-drive0, 00:00:57.443 ==> default: -> value=-device, 00:00:57.443 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:57.443 ==> default: -> value=-device, 00:00:57.443 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:00:57.443 ==> default: -> value=-drive, 00:00:57.443 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:00:57.443 ==> default: -> value=-device, 00:00:57.443 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:57.443 ==> default: -> value=-drive, 00:00:57.443 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:00:57.443 ==> default: -> value=-device, 00:00:57.443 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:57.443 ==> default: -> value=-drive, 00:00:57.443 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:00:57.443 ==> default: -> value=-device, 00:00:57.443 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:57.443 ==> default: -> value=-device, 00:00:57.443 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:00:57.443 ==> default: -> value=-device, 00:00:57.443 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:00:57.443 ==> default: -> value=-drive, 00:00:57.443 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:00:57.443 ==> default: -> value=-device, 00:00:57.443 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:57.443 ==> default: Creating shared folders metadata... 00:00:57.701 ==> default: Starting domain. 00:00:59.605 ==> default: Waiting for domain to get an IP address... 00:01:17.693 ==> default: Waiting for SSH to become available... 00:01:17.693 ==> default: Configuring and enabling network interfaces... 00:01:21.877 default: SSH address: 192.168.121.54:22 00:01:21.877 default: SSH username: vagrant 00:01:21.877 default: SSH auth method: private key 00:01:23.770 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:31.887 ==> default: Mounting SSHFS shared folder... 00:01:33.260 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:33.260 ==> default: Checking Mount.. 00:01:34.634 ==> default: Folder Successfully Mounted! 00:01:34.634 ==> default: Running provisioner: file... 00:01:35.230 default: ~/.gitconfig => .gitconfig 00:01:35.796 00:01:35.796 SUCCESS! 00:01:35.796 00:01:35.796 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:35.796 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:35.796 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:35.796 00:01:35.805 [Pipeline] } 00:01:35.825 [Pipeline] // stage 00:01:35.836 [Pipeline] dir 00:01:35.836 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt 00:01:35.838 [Pipeline] { 00:01:35.853 [Pipeline] catchError 00:01:35.855 [Pipeline] { 00:01:35.870 [Pipeline] sh 00:01:36.146 + vagrant ssh-config --host vagrant 00:01:36.147 + sed -ne /^Host/,$p+ 00:01:36.147 tee ssh_conf 00:01:40.329 Host vagrant 00:01:40.329 HostName 192.168.121.54 00:01:40.329 User vagrant 00:01:40.329 Port 22 00:01:40.329 UserKnownHostsFile /dev/null 00:01:40.329 StrictHostKeyChecking no 00:01:40.329 PasswordAuthentication no 00:01:40.329 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:40.329 IdentitiesOnly yes 00:01:40.329 LogLevel FATAL 00:01:40.329 ForwardAgent yes 00:01:40.329 ForwardX11 yes 00:01:40.329 00:01:40.345 [Pipeline] withEnv 00:01:40.348 [Pipeline] { 00:01:40.366 [Pipeline] sh 00:01:40.644 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:40.644 source /etc/os-release 00:01:40.644 [[ -e /image.version ]] && img=$(< /image.version) 00:01:40.644 # Minimal, systemd-like check. 
00:01:40.644 if [[ -e /.dockerenv ]]; then 00:01:40.644 # Clear garbage from the node's name: 00:01:40.644 # agt-er_autotest_547-896 -> autotest_547-896 00:01:40.644 # $HOSTNAME is the actual container id 00:01:40.644 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:40.644 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:40.644 # We can assume this is a mount from a host where container is running, 00:01:40.644 # so fetch its hostname to easily identify the target swarm worker. 00:01:40.644 container="$(< /etc/hostname) ($agent)" 00:01:40.644 else 00:01:40.644 # Fallback 00:01:40.644 container=$agent 00:01:40.644 fi 00:01:40.644 fi 00:01:40.644 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:40.644 00:01:40.911 [Pipeline] } 00:01:40.932 [Pipeline] // withEnv 00:01:40.941 [Pipeline] setCustomBuildProperty 00:01:40.959 [Pipeline] stage 00:01:40.962 [Pipeline] { (Tests) 00:01:40.983 [Pipeline] sh 00:01:41.260 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:41.533 [Pipeline] sh 00:01:41.814 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:42.086 [Pipeline] timeout 00:01:42.086 Timeout set to expire in 40 min 00:01:42.087 [Pipeline] { 00:01:42.097 [Pipeline] sh 00:01:42.369 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:42.939 HEAD is now at 996bd8752 blob: Fix spdk_bs_blob_decouple_parent when blob's ancestor is an esnap. 00:01:42.953 [Pipeline] sh 00:01:43.230 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:43.500 [Pipeline] sh 00:01:43.773 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:44.044 [Pipeline] sh 00:01:44.320 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:01:44.578 ++ readlink -f spdk_repo 00:01:44.578 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:44.578 + [[ -n /home/vagrant/spdk_repo ]] 00:01:44.578 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:44.578 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:44.578 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:44.578 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:44.578 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:44.578 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:44.578 + cd /home/vagrant/spdk_repo 00:01:44.578 + source /etc/os-release 00:01:44.578 ++ NAME='Fedora Linux' 00:01:44.578 ++ VERSION='38 (Cloud Edition)' 00:01:44.578 ++ ID=fedora 00:01:44.578 ++ VERSION_ID=38 00:01:44.578 ++ VERSION_CODENAME= 00:01:44.578 ++ PLATFORM_ID=platform:f38 00:01:44.578 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:44.578 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:44.578 ++ LOGO=fedora-logo-icon 00:01:44.578 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:44.578 ++ HOME_URL=https://fedoraproject.org/ 00:01:44.578 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:44.578 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:44.578 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:44.578 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:44.578 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:44.578 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:44.578 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:44.578 ++ SUPPORT_END=2024-05-14 00:01:44.578 ++ VARIANT='Cloud Edition' 00:01:44.578 ++ VARIANT_ID=cloud 00:01:44.578 + uname -a 00:01:44.578 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:44.578 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:44.835 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:45.092 Hugepages 00:01:45.092 node hugesize free / total 00:01:45.092 node0 1048576kB 0 / 0 00:01:45.348 node0 2048kB 0 / 0 00:01:45.348 00:01:45.348 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:45.348 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:45.348 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:45.348 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:45.348 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:01:45.348 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:45.348 + rm -f /tmp/spdk-ld-path 00:01:45.348 + source autorun-spdk.conf 00:01:45.348 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:45.348 ++ SPDK_TEST_NVME=1 00:01:45.348 ++ SPDK_TEST_FTL=1 00:01:45.348 ++ SPDK_TEST_ISAL=1 00:01:45.348 ++ SPDK_RUN_ASAN=1 00:01:45.348 ++ SPDK_RUN_UBSAN=1 00:01:45.348 ++ SPDK_TEST_XNVME=1 00:01:45.348 ++ SPDK_TEST_NVME_FDP=1 00:01:45.348 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:45.348 ++ RUN_NIGHTLY=0 00:01:45.348 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:45.348 + [[ -n '' ]] 00:01:45.348 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:45.348 + for M in /var/spdk/build-*-manifest.txt 00:01:45.348 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:45.348 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:45.348 + for M in /var/spdk/build-*-manifest.txt 00:01:45.348 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:45.348 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:45.348 ++ uname 00:01:45.348 + [[ Linux == \L\i\n\u\x ]] 00:01:45.348 + sudo dmesg -T 00:01:45.348 + sudo dmesg --clear 00:01:45.348 + dmesg_pid=5201 00:01:45.348 + sudo dmesg -Tw 00:01:45.348 + [[ Fedora Linux == FreeBSD ]] 00:01:45.348 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:45.348 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:45.348 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:45.348 + [[ -x /usr/src/fio-static/fio ]] 00:01:45.348 + export FIO_BIN=/usr/src/fio-static/fio 00:01:45.348 + FIO_BIN=/usr/src/fio-static/fio 00:01:45.348 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:45.348 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:45.348 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:45.348 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:45.348 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:45.348 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:45.348 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:45.348 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:45.348 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:45.348 Test configuration: 00:01:45.348 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:45.348 SPDK_TEST_NVME=1 00:01:45.348 SPDK_TEST_FTL=1 00:01:45.348 SPDK_TEST_ISAL=1 00:01:45.348 SPDK_RUN_ASAN=1 00:01:45.348 SPDK_RUN_UBSAN=1 00:01:45.348 SPDK_TEST_XNVME=1 00:01:45.348 SPDK_TEST_NVME_FDP=1 00:01:45.348 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:45.605 RUN_NIGHTLY=0 19:23:36 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:45.605 19:23:36 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:45.605 19:23:36 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:45.605 19:23:36 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:45.605 19:23:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:45.605 19:23:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:45.605 19:23:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:45.605 19:23:36 -- paths/export.sh@5 -- $ export PATH 00:01:45.605 19:23:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:45.605 19:23:36 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:45.605 19:23:36 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:45.605 19:23:36 -- common/autobuild_common.sh@444 -- $ mktemp -dt 
spdk_1721071416.XXXXXX 00:01:45.605 19:23:36 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721071416.QZmDds 00:01:45.605 19:23:36 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:45.605 19:23:36 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:45.605 19:23:36 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:45.605 19:23:36 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:45.606 19:23:36 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:45.606 19:23:36 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:45.606 19:23:36 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:45.606 19:23:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:45.606 19:23:36 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:45.606 19:23:36 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:45.606 19:23:36 -- pm/common@17 -- $ local monitor 00:01:45.606 19:23:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:45.606 19:23:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:45.606 19:23:36 -- pm/common@25 -- $ sleep 1 00:01:45.606 19:23:36 -- pm/common@21 -- $ date +%s 00:01:45.606 19:23:36 -- pm/common@21 -- $ date +%s 00:01:45.606 19:23:36 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721071416 00:01:45.606 19:23:36 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721071416 00:01:45.606 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721071416_collect-vmstat.pm.log 00:01:45.606 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721071416_collect-cpu-load.pm.log 00:01:46.534 19:23:37 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:46.534 19:23:37 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:46.534 19:23:37 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:46.534 19:23:37 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:46.534 19:23:37 -- spdk/autobuild.sh@16 -- $ date -u 00:01:46.534 Mon Jul 15 07:23:37 PM UTC 2024 00:01:46.534 19:23:37 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:46.534 v24.09-pre-210-g996bd8752 00:01:46.534 19:23:37 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:46.534 19:23:37 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:46.534 19:23:37 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:46.534 19:23:37 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:46.534 19:23:37 -- common/autotest_common.sh@10 -- $ set +x 00:01:46.534 ************************************ 00:01:46.534 START TEST asan 00:01:46.534 ************************************ 00:01:46.534 using asan 00:01:46.534 19:23:37 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:01:46.534 00:01:46.534 
real 0m0.000s 00:01:46.534 user 0m0.000s 00:01:46.534 sys 0m0.000s 00:01:46.534 19:23:37 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:46.534 19:23:37 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:46.534 ************************************ 00:01:46.534 END TEST asan 00:01:46.534 ************************************ 00:01:46.534 19:23:37 -- common/autotest_common.sh@1142 -- $ return 0 00:01:46.534 19:23:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:46.534 19:23:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:46.534 19:23:37 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:46.534 19:23:37 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:46.534 19:23:37 -- common/autotest_common.sh@10 -- $ set +x 00:01:46.791 ************************************ 00:01:46.791 START TEST ubsan 00:01:46.791 ************************************ 00:01:46.791 using ubsan 00:01:46.791 19:23:37 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:46.791 00:01:46.791 real 0m0.000s 00:01:46.791 user 0m0.000s 00:01:46.791 sys 0m0.000s 00:01:46.791 19:23:37 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:46.791 ************************************ 00:01:46.791 19:23:37 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:46.791 END TEST ubsan 00:01:46.791 ************************************ 00:01:46.791 19:23:37 -- common/autotest_common.sh@1142 -- $ return 0 00:01:46.791 19:23:37 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:46.791 19:23:37 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:46.791 19:23:37 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:46.791 19:23:37 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:46.791 19:23:37 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:46.791 19:23:37 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:46.791 19:23:37 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:46.791 19:23:37 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:46.791 19:23:37 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:46.791 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:46.791 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:47.356 Using 'verbs' RDMA provider 00:02:03.176 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:15.377 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:15.635 Creating mk/config.mk...done. 00:02:15.635 Creating mk/cc.flags.mk...done. 00:02:15.635 Type 'make' to build. 
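[Editor's note] The configure invocation and flags above are taken verbatim from this log (they come from autorun-spdk.conf via get_config_params). A minimal sketch of reproducing that configure + build step by hand on the test VM follows; it assumes the same /home/vagrant/spdk_repo/spdk checkout and that the fio sources and xnvme dependencies referenced by the flags are already in place, and is not part of the recorded output:

    # Hypothetical manual reproduction of the configure/build step recorded in this log.
    # All flags are copied from the log; the checkout path is assumed to match the CI VM.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk \
        --with-xnvme --with-shared
    make -j10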
00:02:15.635 19:24:06 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:15.635 19:24:06 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:15.635 19:24:06 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:15.635 19:24:06 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.901 ************************************ 00:02:15.901 START TEST make 00:02:15.901 ************************************ 00:02:15.901 19:24:06 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:16.160 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:16.160 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:16.160 meson setup builddir \ 00:02:16.160 -Dwith-libaio=enabled \ 00:02:16.160 -Dwith-liburing=enabled \ 00:02:16.161 -Dwith-libvfn=disabled \ 00:02:16.161 -Dwith-spdk=false && \ 00:02:16.161 meson compile -C builddir && \ 00:02:16.161 cd -) 00:02:16.161 make[1]: Nothing to be done for 'all'. 00:02:18.690 The Meson build system 00:02:18.690 Version: 1.3.1 00:02:18.690 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:18.690 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:18.690 Build type: native build 00:02:18.690 Project name: xnvme 00:02:18.690 Project version: 0.7.3 00:02:18.690 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:18.690 C linker for the host machine: cc ld.bfd 2.39-16 00:02:18.690 Host machine cpu family: x86_64 00:02:18.690 Host machine cpu: x86_64 00:02:18.690 Message: host_machine.system: linux 00:02:18.690 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:18.690 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:18.690 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:18.690 Run-time dependency threads found: YES 00:02:18.690 Has header "setupapi.h" : NO 00:02:18.690 Has header "linux/blkzoned.h" : YES 00:02:18.690 Has header "linux/blkzoned.h" : YES (cached) 00:02:18.690 Has header "libaio.h" : YES 00:02:18.690 Library aio found: YES 00:02:18.690 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:18.690 Run-time dependency liburing found: YES 2.2 00:02:18.690 Dependency libvfn skipped: feature with-libvfn disabled 00:02:18.690 Run-time dependency appleframeworks found: NO (tried framework) 00:02:18.690 Run-time dependency appleframeworks found: NO (tried framework) 00:02:18.690 Configuring xnvme_config.h using configuration 00:02:18.690 Configuring xnvme.spec using configuration 00:02:18.690 Run-time dependency bash-completion found: YES 2.11 00:02:18.690 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:18.690 Program cp found: YES (/usr/bin/cp) 00:02:18.690 Has header "winsock2.h" : NO 00:02:18.690 Has header "dbghelp.h" : NO 00:02:18.690 Library rpcrt4 found: NO 00:02:18.690 Library rt found: YES 00:02:18.690 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:18.690 Found CMake: /usr/bin/cmake (3.27.7) 00:02:18.690 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:02:18.690 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:02:18.690 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:02:18.690 Build targets in project: 32 00:02:18.690 00:02:18.690 xnvme 0.7.3 00:02:18.690 00:02:18.690 User defined options 00:02:18.690 with-libaio : enabled 00:02:18.690 with-liburing: enabled 00:02:18.690 with-libvfn : disabled 00:02:18.690 with-spdk : false 00:02:18.690 00:02:18.690 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:19.253 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:19.253 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:02:19.253 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:02:19.253 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:02:19.511 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:02:19.511 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:02:19.511 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:02:19.511 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:02:19.511 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:02:19.511 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:02:19.511 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:02:19.511 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:02:19.511 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:02:19.511 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:02:19.511 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:02:19.511 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:02:19.511 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:02:19.511 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:02:19.511 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:02:19.511 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:02:19.767 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:02:19.767 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:02:19.767 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:02:19.767 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:02:19.767 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:02:19.767 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:02:19.767 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:02:19.767 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:02:19.767 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:02:19.767 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:02:19.767 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:02:19.767 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:02:19.767 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:02:19.767 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:02:19.767 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:02:19.767 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:02:19.767 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:02:19.767 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:02:19.767 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:02:19.767 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:02:19.767 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:02:19.767 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:02:19.767 [42/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:02:19.767 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:02:19.767 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:02:19.767 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:02:19.767 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:02:19.767 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:02:19.767 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:02:19.767 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:02:19.767 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:02:20.024 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:02:20.024 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:02:20.024 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:02:20.024 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:02:20.024 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:02:20.024 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:02:20.024 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:02:20.024 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:02:20.024 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:02:20.024 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:02:20.024 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:02:20.024 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:02:20.024 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:02:20.024 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:02:20.024 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:02:20.315 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:02:20.315 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:02:20.315 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:02:20.315 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:02:20.315 [70/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:02:20.315 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:02:20.315 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:02:20.315 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:02:20.315 [74/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:02:20.315 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:02:20.315 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:02:20.315 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:02:20.315 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:02:20.315 [79/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:02:20.315 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:02:20.315 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:02:20.315 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:02:20.315 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:02:20.574 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:02:20.574 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:02:20.574 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be.c.o 00:02:20.574 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:02:20.574 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:02:20.574 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:02:20.574 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:02:20.574 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:02:20.574 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:02:20.574 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:02:20.574 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:02:20.574 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:02:20.574 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:02:20.574 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:02:20.574 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:02:20.574 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:02:20.574 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:02:20.574 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:02:20.574 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:02:20.574 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:02:20.574 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:02:20.574 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:02:20.574 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:02:20.832 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:02:20.832 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:02:20.832 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:02:20.832 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:02:20.832 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:02:20.832 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:02:20.832 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:02:20.832 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:02:20.832 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:02:20.832 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:02:20.832 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:02:20.832 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:02:20.832 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:02:20.832 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:02:20.832 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:02:20.832 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:02:20.832 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:02:20.832 [124/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:02:20.832 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:02:20.832 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:02:20.832 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:02:20.832 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:02:20.832 [129/203] Compiling C object 
lib/libxnvme.a.p/xnvme_ident.c.o 00:02:20.832 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:02:20.832 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:02:20.832 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:02:21.089 [133/203] Linking target lib/libxnvme.so 00:02:21.089 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:02:21.089 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:02:21.089 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:02:21.089 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:02:21.089 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:02:21.089 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:02:21.089 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:02:21.089 [141/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:02:21.089 [142/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:02:21.089 [143/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:02:21.089 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:02:21.346 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:02:21.346 [146/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:02:21.346 [147/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:02:21.346 [148/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:02:21.346 [149/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:02:21.346 [150/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:02:21.346 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:02:21.346 [152/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:02:21.346 [153/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:02:21.346 [154/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:02:21.346 [155/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:02:21.346 [156/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:02:21.346 [157/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:02:21.602 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:02:21.602 [159/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:02:21.602 [160/203] Compiling C object tools/xdd.p/xdd.c.o 00:02:21.602 [161/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:02:21.602 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:02:21.602 [163/203] Compiling C object tools/kvs.p/kvs.c.o 00:02:21.602 [164/203] Compiling C object tools/lblk.p/lblk.c.o 00:02:21.602 [165/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:02:21.602 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:02:21.602 [167/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:02:21.602 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:02:21.602 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:02:21.860 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:02:21.860 [171/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:02:21.860 [172/203] Linking static target lib/libxnvme.a 00:02:21.860 [173/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:02:21.860 [174/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:02:21.860 [175/203] Linking target tests/xnvme_tests_lblk 
00:02:21.860 [176/203] Linking target tests/xnvme_tests_enum 00:02:21.860 [177/203] Linking target tests/xnvme_tests_cli 00:02:21.860 [178/203] Linking target tests/xnvme_tests_scc 00:02:21.860 [179/203] Linking target tests/xnvme_tests_znd_append 00:02:21.860 [180/203] Linking target tests/xnvme_tests_xnvme_file 00:02:21.860 [181/203] Linking target tests/xnvme_tests_async_intf 00:02:21.860 [182/203] Linking target tests/xnvme_tests_znd_explicit_open 00:02:21.860 [183/203] Linking target tests/xnvme_tests_buf 00:02:21.860 [184/203] Linking target tests/xnvme_tests_ioworker 00:02:21.860 [185/203] Linking target tests/xnvme_tests_xnvme_cli 00:02:21.860 [186/203] Linking target tests/xnvme_tests_znd_state 00:02:21.860 [187/203] Linking target examples/xnvme_enum 00:02:21.860 [188/203] Linking target tests/xnvme_tests_znd_zrwa 00:02:21.860 [189/203] Linking target tests/xnvme_tests_kvs 00:02:21.860 [190/203] Linking target tests/xnvme_tests_map 00:02:21.860 [191/203] Linking target tools/lblk 00:02:21.860 [192/203] Linking target tools/zoned 00:02:21.860 [193/203] Linking target tools/xnvme 00:02:21.860 [194/203] Linking target examples/xnvme_dev 00:02:21.860 [195/203] Linking target tools/xdd 00:02:21.860 [196/203] Linking target tools/kvs 00:02:22.117 [197/203] Linking target tools/xnvme_file 00:02:22.117 [198/203] Linking target examples/xnvme_io_async 00:02:22.117 [199/203] Linking target examples/xnvme_hello 00:02:22.117 [200/203] Linking target examples/xnvme_single_async 00:02:22.117 [201/203] Linking target examples/zoned_io_sync 00:02:22.117 [202/203] Linking target examples/xnvme_single_sync 00:02:22.117 [203/203] Linking target examples/zoned_io_async 00:02:22.117 INFO: autodetecting backend as ninja 00:02:22.117 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:22.117 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:28.744 The Meson build system 00:02:28.744 Version: 1.3.1 00:02:28.744 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:28.744 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:28.744 Build type: native build 00:02:28.744 Program cat found: YES (/usr/bin/cat) 00:02:28.744 Project name: DPDK 00:02:28.744 Project version: 24.03.0 00:02:28.744 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:28.744 C linker for the host machine: cc ld.bfd 2.39-16 00:02:28.744 Host machine cpu family: x86_64 00:02:28.744 Host machine cpu: x86_64 00:02:28.744 Message: ## Building in Developer Mode ## 00:02:28.744 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:28.744 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:28.744 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:28.744 Program python3 found: YES (/usr/bin/python3) 00:02:28.744 Program cat found: YES (/usr/bin/cat) 00:02:28.744 Compiler for C supports arguments -march=native: YES 00:02:28.744 Checking for size of "void *" : 8 00:02:28.744 Checking for size of "void *" : 8 (cached) 00:02:28.744 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:28.744 Library m found: YES 00:02:28.744 Library numa found: YES 00:02:28.744 Has header "numaif.h" : YES 00:02:28.744 Library fdt found: NO 00:02:28.744 Library execinfo found: NO 00:02:28.744 Has header "execinfo.h" : YES 00:02:28.744 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:28.744 Run-time 
dependency libarchive found: NO (tried pkgconfig) 00:02:28.744 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:28.744 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:28.744 Run-time dependency openssl found: YES 3.0.9 00:02:28.744 Run-time dependency libpcap found: YES 1.10.4 00:02:28.744 Has header "pcap.h" with dependency libpcap: YES 00:02:28.744 Compiler for C supports arguments -Wcast-qual: YES 00:02:28.744 Compiler for C supports arguments -Wdeprecated: YES 00:02:28.744 Compiler for C supports arguments -Wformat: YES 00:02:28.744 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:28.744 Compiler for C supports arguments -Wformat-security: NO 00:02:28.744 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:28.744 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:28.744 Compiler for C supports arguments -Wnested-externs: YES 00:02:28.744 Compiler for C supports arguments -Wold-style-definition: YES 00:02:28.744 Compiler for C supports arguments -Wpointer-arith: YES 00:02:28.744 Compiler for C supports arguments -Wsign-compare: YES 00:02:28.744 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:28.744 Compiler for C supports arguments -Wundef: YES 00:02:28.744 Compiler for C supports arguments -Wwrite-strings: YES 00:02:28.744 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:28.744 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:28.744 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:28.744 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:28.744 Program objdump found: YES (/usr/bin/objdump) 00:02:28.744 Compiler for C supports arguments -mavx512f: YES 00:02:28.744 Checking if "AVX512 checking" compiles: YES 00:02:28.744 Fetching value of define "__SSE4_2__" : 1 00:02:28.744 Fetching value of define "__AES__" : 1 00:02:28.744 Fetching value of define "__AVX__" : 1 00:02:28.744 Fetching value of define "__AVX2__" : 1 00:02:28.744 Fetching value of define "__AVX512BW__" : 1 00:02:28.744 Fetching value of define "__AVX512CD__" : 1 00:02:28.744 Fetching value of define "__AVX512DQ__" : 1 00:02:28.744 Fetching value of define "__AVX512F__" : 1 00:02:28.744 Fetching value of define "__AVX512VL__" : 1 00:02:28.744 Fetching value of define "__PCLMUL__" : 1 00:02:28.744 Fetching value of define "__RDRND__" : 1 00:02:28.744 Fetching value of define "__RDSEED__" : 1 00:02:28.744 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:28.744 Fetching value of define "__znver1__" : (undefined) 00:02:28.744 Fetching value of define "__znver2__" : (undefined) 00:02:28.744 Fetching value of define "__znver3__" : (undefined) 00:02:28.744 Fetching value of define "__znver4__" : (undefined) 00:02:28.744 Library asan found: YES 00:02:28.744 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:28.744 Message: lib/log: Defining dependency "log" 00:02:28.744 Message: lib/kvargs: Defining dependency "kvargs" 00:02:28.744 Message: lib/telemetry: Defining dependency "telemetry" 00:02:28.744 Library rt found: YES 00:02:28.744 Checking for function "getentropy" : NO 00:02:28.744 Message: lib/eal: Defining dependency "eal" 00:02:28.744 Message: lib/ring: Defining dependency "ring" 00:02:28.744 Message: lib/rcu: Defining dependency "rcu" 00:02:28.744 Message: lib/mempool: Defining dependency "mempool" 00:02:28.744 Message: lib/mbuf: Defining dependency "mbuf" 00:02:28.744 Fetching value of define "__PCLMUL__" : 1 
(cached) 00:02:28.744 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:28.744 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:28.744 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:28.744 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:28.744 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:28.744 Compiler for C supports arguments -mpclmul: YES 00:02:28.744 Compiler for C supports arguments -maes: YES 00:02:28.744 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:28.744 Compiler for C supports arguments -mavx512bw: YES 00:02:28.744 Compiler for C supports arguments -mavx512dq: YES 00:02:28.744 Compiler for C supports arguments -mavx512vl: YES 00:02:28.744 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:28.744 Compiler for C supports arguments -mavx2: YES 00:02:28.744 Compiler for C supports arguments -mavx: YES 00:02:28.744 Message: lib/net: Defining dependency "net" 00:02:28.744 Message: lib/meter: Defining dependency "meter" 00:02:28.744 Message: lib/ethdev: Defining dependency "ethdev" 00:02:28.744 Message: lib/pci: Defining dependency "pci" 00:02:28.744 Message: lib/cmdline: Defining dependency "cmdline" 00:02:28.744 Message: lib/hash: Defining dependency "hash" 00:02:28.744 Message: lib/timer: Defining dependency "timer" 00:02:28.744 Message: lib/compressdev: Defining dependency "compressdev" 00:02:28.744 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:28.744 Message: lib/dmadev: Defining dependency "dmadev" 00:02:28.744 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:28.744 Message: lib/power: Defining dependency "power" 00:02:28.744 Message: lib/reorder: Defining dependency "reorder" 00:02:28.744 Message: lib/security: Defining dependency "security" 00:02:28.744 Has header "linux/userfaultfd.h" : YES 00:02:28.744 Has header "linux/vduse.h" : YES 00:02:28.744 Message: lib/vhost: Defining dependency "vhost" 00:02:28.744 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:28.745 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:28.745 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:28.745 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:28.745 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:28.745 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:28.745 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:28.745 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:28.745 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:28.745 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:28.745 Program doxygen found: YES (/usr/bin/doxygen) 00:02:28.745 Configuring doxy-api-html.conf using configuration 00:02:28.745 Configuring doxy-api-man.conf using configuration 00:02:28.745 Program mandb found: YES (/usr/bin/mandb) 00:02:28.745 Program sphinx-build found: NO 00:02:28.745 Configuring rte_build_config.h using configuration 00:02:28.745 Message: 00:02:28.745 ================= 00:02:28.745 Applications Enabled 00:02:28.745 ================= 00:02:28.745 00:02:28.745 apps: 00:02:28.745 00:02:28.745 00:02:28.745 Message: 00:02:28.745 ================= 00:02:28.745 Libraries Enabled 00:02:28.745 ================= 00:02:28.745 00:02:28.745 libs: 00:02:28.745 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:28.745 net, meter, ethdev, pci, 
cmdline, hash, timer, compressdev, 00:02:28.745 cryptodev, dmadev, power, reorder, security, vhost, 00:02:28.745 00:02:28.745 Message: 00:02:28.745 =============== 00:02:28.745 Drivers Enabled 00:02:28.745 =============== 00:02:28.745 00:02:28.745 common: 00:02:28.745 00:02:28.745 bus: 00:02:28.745 pci, vdev, 00:02:28.745 mempool: 00:02:28.745 ring, 00:02:28.745 dma: 00:02:28.745 00:02:28.745 net: 00:02:28.745 00:02:28.745 crypto: 00:02:28.745 00:02:28.745 compress: 00:02:28.745 00:02:28.745 vdpa: 00:02:28.745 00:02:28.745 00:02:28.745 Message: 00:02:28.745 ================= 00:02:28.745 Content Skipped 00:02:28.745 ================= 00:02:28.745 00:02:28.745 apps: 00:02:28.745 dumpcap: explicitly disabled via build config 00:02:28.745 graph: explicitly disabled via build config 00:02:28.745 pdump: explicitly disabled via build config 00:02:28.745 proc-info: explicitly disabled via build config 00:02:28.745 test-acl: explicitly disabled via build config 00:02:28.745 test-bbdev: explicitly disabled via build config 00:02:28.745 test-cmdline: explicitly disabled via build config 00:02:28.745 test-compress-perf: explicitly disabled via build config 00:02:28.745 test-crypto-perf: explicitly disabled via build config 00:02:28.745 test-dma-perf: explicitly disabled via build config 00:02:28.745 test-eventdev: explicitly disabled via build config 00:02:28.745 test-fib: explicitly disabled via build config 00:02:28.745 test-flow-perf: explicitly disabled via build config 00:02:28.745 test-gpudev: explicitly disabled via build config 00:02:28.745 test-mldev: explicitly disabled via build config 00:02:28.745 test-pipeline: explicitly disabled via build config 00:02:28.745 test-pmd: explicitly disabled via build config 00:02:28.745 test-regex: explicitly disabled via build config 00:02:28.745 test-sad: explicitly disabled via build config 00:02:28.745 test-security-perf: explicitly disabled via build config 00:02:28.745 00:02:28.745 libs: 00:02:28.745 argparse: explicitly disabled via build config 00:02:28.745 metrics: explicitly disabled via build config 00:02:28.745 acl: explicitly disabled via build config 00:02:28.745 bbdev: explicitly disabled via build config 00:02:28.745 bitratestats: explicitly disabled via build config 00:02:28.745 bpf: explicitly disabled via build config 00:02:28.745 cfgfile: explicitly disabled via build config 00:02:28.745 distributor: explicitly disabled via build config 00:02:28.745 efd: explicitly disabled via build config 00:02:28.745 eventdev: explicitly disabled via build config 00:02:28.745 dispatcher: explicitly disabled via build config 00:02:28.745 gpudev: explicitly disabled via build config 00:02:28.745 gro: explicitly disabled via build config 00:02:28.745 gso: explicitly disabled via build config 00:02:28.745 ip_frag: explicitly disabled via build config 00:02:28.745 jobstats: explicitly disabled via build config 00:02:28.745 latencystats: explicitly disabled via build config 00:02:28.745 lpm: explicitly disabled via build config 00:02:28.745 member: explicitly disabled via build config 00:02:28.745 pcapng: explicitly disabled via build config 00:02:28.745 rawdev: explicitly disabled via build config 00:02:28.745 regexdev: explicitly disabled via build config 00:02:28.745 mldev: explicitly disabled via build config 00:02:28.745 rib: explicitly disabled via build config 00:02:28.745 sched: explicitly disabled via build config 00:02:28.745 stack: explicitly disabled via build config 00:02:28.745 ipsec: explicitly disabled via build config 00:02:28.745 pdcp: 
explicitly disabled via build config 00:02:28.745 fib: explicitly disabled via build config 00:02:28.745 port: explicitly disabled via build config 00:02:28.745 pdump: explicitly disabled via build config 00:02:28.745 table: explicitly disabled via build config 00:02:28.745 pipeline: explicitly disabled via build config 00:02:28.745 graph: explicitly disabled via build config 00:02:28.745 node: explicitly disabled via build config 00:02:28.745 00:02:28.745 drivers: 00:02:28.745 common/cpt: not in enabled drivers build config 00:02:28.745 common/dpaax: not in enabled drivers build config 00:02:28.745 common/iavf: not in enabled drivers build config 00:02:28.745 common/idpf: not in enabled drivers build config 00:02:28.745 common/ionic: not in enabled drivers build config 00:02:28.745 common/mvep: not in enabled drivers build config 00:02:28.745 common/octeontx: not in enabled drivers build config 00:02:28.745 bus/auxiliary: not in enabled drivers build config 00:02:28.745 bus/cdx: not in enabled drivers build config 00:02:28.745 bus/dpaa: not in enabled drivers build config 00:02:28.745 bus/fslmc: not in enabled drivers build config 00:02:28.745 bus/ifpga: not in enabled drivers build config 00:02:28.745 bus/platform: not in enabled drivers build config 00:02:28.745 bus/uacce: not in enabled drivers build config 00:02:28.745 bus/vmbus: not in enabled drivers build config 00:02:28.745 common/cnxk: not in enabled drivers build config 00:02:28.745 common/mlx5: not in enabled drivers build config 00:02:28.745 common/nfp: not in enabled drivers build config 00:02:28.746 common/nitrox: not in enabled drivers build config 00:02:28.746 common/qat: not in enabled drivers build config 00:02:28.746 common/sfc_efx: not in enabled drivers build config 00:02:28.746 mempool/bucket: not in enabled drivers build config 00:02:28.746 mempool/cnxk: not in enabled drivers build config 00:02:28.746 mempool/dpaa: not in enabled drivers build config 00:02:28.746 mempool/dpaa2: not in enabled drivers build config 00:02:28.746 mempool/octeontx: not in enabled drivers build config 00:02:28.746 mempool/stack: not in enabled drivers build config 00:02:28.746 dma/cnxk: not in enabled drivers build config 00:02:28.746 dma/dpaa: not in enabled drivers build config 00:02:28.746 dma/dpaa2: not in enabled drivers build config 00:02:28.746 dma/hisilicon: not in enabled drivers build config 00:02:28.746 dma/idxd: not in enabled drivers build config 00:02:28.746 dma/ioat: not in enabled drivers build config 00:02:28.746 dma/skeleton: not in enabled drivers build config 00:02:28.746 net/af_packet: not in enabled drivers build config 00:02:28.746 net/af_xdp: not in enabled drivers build config 00:02:28.746 net/ark: not in enabled drivers build config 00:02:28.746 net/atlantic: not in enabled drivers build config 00:02:28.746 net/avp: not in enabled drivers build config 00:02:28.746 net/axgbe: not in enabled drivers build config 00:02:28.746 net/bnx2x: not in enabled drivers build config 00:02:28.746 net/bnxt: not in enabled drivers build config 00:02:28.746 net/bonding: not in enabled drivers build config 00:02:28.746 net/cnxk: not in enabled drivers build config 00:02:28.746 net/cpfl: not in enabled drivers build config 00:02:28.746 net/cxgbe: not in enabled drivers build config 00:02:28.746 net/dpaa: not in enabled drivers build config 00:02:28.746 net/dpaa2: not in enabled drivers build config 00:02:28.746 net/e1000: not in enabled drivers build config 00:02:28.746 net/ena: not in enabled drivers build config 00:02:28.746 
net/enetc: not in enabled drivers build config 00:02:28.746 net/enetfec: not in enabled drivers build config 00:02:28.746 net/enic: not in enabled drivers build config 00:02:28.746 net/failsafe: not in enabled drivers build config 00:02:28.746 net/fm10k: not in enabled drivers build config 00:02:28.746 net/gve: not in enabled drivers build config 00:02:28.746 net/hinic: not in enabled drivers build config 00:02:28.746 net/hns3: not in enabled drivers build config 00:02:28.746 net/i40e: not in enabled drivers build config 00:02:28.746 net/iavf: not in enabled drivers build config 00:02:28.746 net/ice: not in enabled drivers build config 00:02:28.746 net/idpf: not in enabled drivers build config 00:02:28.746 net/igc: not in enabled drivers build config 00:02:28.746 net/ionic: not in enabled drivers build config 00:02:28.746 net/ipn3ke: not in enabled drivers build config 00:02:28.746 net/ixgbe: not in enabled drivers build config 00:02:28.746 net/mana: not in enabled drivers build config 00:02:28.746 net/memif: not in enabled drivers build config 00:02:28.746 net/mlx4: not in enabled drivers build config 00:02:28.746 net/mlx5: not in enabled drivers build config 00:02:28.746 net/mvneta: not in enabled drivers build config 00:02:28.746 net/mvpp2: not in enabled drivers build config 00:02:28.746 net/netvsc: not in enabled drivers build config 00:02:28.746 net/nfb: not in enabled drivers build config 00:02:28.746 net/nfp: not in enabled drivers build config 00:02:28.746 net/ngbe: not in enabled drivers build config 00:02:28.746 net/null: not in enabled drivers build config 00:02:28.746 net/octeontx: not in enabled drivers build config 00:02:28.746 net/octeon_ep: not in enabled drivers build config 00:02:28.746 net/pcap: not in enabled drivers build config 00:02:28.746 net/pfe: not in enabled drivers build config 00:02:28.746 net/qede: not in enabled drivers build config 00:02:28.746 net/ring: not in enabled drivers build config 00:02:28.746 net/sfc: not in enabled drivers build config 00:02:28.746 net/softnic: not in enabled drivers build config 00:02:28.746 net/tap: not in enabled drivers build config 00:02:28.746 net/thunderx: not in enabled drivers build config 00:02:28.746 net/txgbe: not in enabled drivers build config 00:02:28.746 net/vdev_netvsc: not in enabled drivers build config 00:02:28.746 net/vhost: not in enabled drivers build config 00:02:28.746 net/virtio: not in enabled drivers build config 00:02:28.746 net/vmxnet3: not in enabled drivers build config 00:02:28.746 raw/*: missing internal dependency, "rawdev" 00:02:28.746 crypto/armv8: not in enabled drivers build config 00:02:28.746 crypto/bcmfs: not in enabled drivers build config 00:02:28.746 crypto/caam_jr: not in enabled drivers build config 00:02:28.746 crypto/ccp: not in enabled drivers build config 00:02:28.746 crypto/cnxk: not in enabled drivers build config 00:02:28.746 crypto/dpaa_sec: not in enabled drivers build config 00:02:28.746 crypto/dpaa2_sec: not in enabled drivers build config 00:02:28.746 crypto/ipsec_mb: not in enabled drivers build config 00:02:28.746 crypto/mlx5: not in enabled drivers build config 00:02:28.746 crypto/mvsam: not in enabled drivers build config 00:02:28.746 crypto/nitrox: not in enabled drivers build config 00:02:28.746 crypto/null: not in enabled drivers build config 00:02:28.746 crypto/octeontx: not in enabled drivers build config 00:02:28.746 crypto/openssl: not in enabled drivers build config 00:02:28.746 crypto/scheduler: not in enabled drivers build config 00:02:28.746 crypto/uadk: 
not in enabled drivers build config 00:02:28.746 crypto/virtio: not in enabled drivers build config 00:02:28.746 compress/isal: not in enabled drivers build config 00:02:28.746 compress/mlx5: not in enabled drivers build config 00:02:28.746 compress/nitrox: not in enabled drivers build config 00:02:28.746 compress/octeontx: not in enabled drivers build config 00:02:28.746 compress/zlib: not in enabled drivers build config 00:02:28.746 regex/*: missing internal dependency, "regexdev" 00:02:28.746 ml/*: missing internal dependency, "mldev" 00:02:28.746 vdpa/ifc: not in enabled drivers build config 00:02:28.746 vdpa/mlx5: not in enabled drivers build config 00:02:28.746 vdpa/nfp: not in enabled drivers build config 00:02:28.746 vdpa/sfc: not in enabled drivers build config 00:02:28.746 event/*: missing internal dependency, "eventdev" 00:02:28.746 baseband/*: missing internal dependency, "bbdev" 00:02:28.746 gpu/*: missing internal dependency, "gpudev" 00:02:28.746 00:02:28.746 00:02:28.746 Build targets in project: 85 00:02:28.746 00:02:28.746 DPDK 24.03.0 00:02:28.746 00:02:28.746 User defined options 00:02:28.746 buildtype : debug 00:02:28.746 default_library : shared 00:02:28.746 libdir : lib 00:02:28.746 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:28.746 b_sanitize : address 00:02:28.746 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:28.746 c_link_args : 00:02:28.746 cpu_instruction_set: native 00:02:28.746 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:28.747 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:28.747 enable_docs : false 00:02:28.747 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:28.747 enable_kmods : false 00:02:28.747 max_lcores : 128 00:02:28.747 tests : false 00:02:28.747 00:02:28.747 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:29.329 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:29.329 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:29.329 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:29.329 [3/268] Linking static target lib/librte_kvargs.a 00:02:29.329 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:29.329 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:29.329 [6/268] Linking static target lib/librte_log.a 00:02:29.896 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:29.896 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:29.896 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:29.896 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:29.896 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:29.896 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:30.154 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:30.154 [14/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:30.154 [15/268] Linking static target lib/librte_telemetry.a 00:02:30.154 [16/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.154 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:30.154 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:30.411 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:30.411 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.669 [21/268] Linking target lib/librte_log.so.24.1 00:02:30.669 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:30.927 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:30.928 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:30.928 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:30.928 [26/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:30.928 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:30.928 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:30.928 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.928 [30/268] Linking target lib/librte_kvargs.so.24.1 00:02:31.185 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:31.185 [32/268] Linking target lib/librte_telemetry.so.24.1 00:02:31.185 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:31.185 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:31.442 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:31.442 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:31.442 [37/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:31.442 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:31.442 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:31.442 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:31.442 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:31.442 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:31.700 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:31.700 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:31.957 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:31.957 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:31.957 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:31.957 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:32.215 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:32.215 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:32.215 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:32.472 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:32.472 [53/268] Compiling 
C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:32.472 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:32.472 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:32.729 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:32.729 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:32.729 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:32.985 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:32.985 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:32.985 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:32.985 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:32.985 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:33.243 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:33.243 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:33.243 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:33.501 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:33.501 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:33.501 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:33.761 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:33.761 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:33.761 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:33.761 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:33.761 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:33.761 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:34.020 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:34.020 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:34.020 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:34.020 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:34.278 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:34.536 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:34.536 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:34.536 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:34.536 [84/268] Linking static target lib/librte_ring.a 00:02:34.536 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:34.795 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:34.795 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:35.053 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:35.053 [89/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.053 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:35.311 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:35.311 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:35.311 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:35.311 
[94/268] Linking static target lib/librte_rcu.a 00:02:35.311 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:35.311 [96/268] Linking static target lib/librte_eal.a 00:02:35.879 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:35.879 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:35.879 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:35.879 [100/268] Linking static target lib/librte_mempool.a 00:02:35.879 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:35.879 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:35.879 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:36.137 [104/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.137 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:36.396 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:36.396 [107/268] Linking static target lib/librte_meter.a 00:02:36.654 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:36.654 [109/268] Linking static target lib/librte_mbuf.a 00:02:36.654 [110/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:36.654 [111/268] Linking static target lib/librte_net.a 00:02:36.912 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:36.912 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:36.913 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.913 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:37.171 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.171 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.429 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:37.994 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:37.995 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:37.995 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.995 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:38.561 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:38.819 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:38.819 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:38.819 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:38.819 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:38.819 [128/268] Linking static target lib/librte_pci.a 00:02:38.819 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:39.077 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:39.077 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:39.077 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:39.077 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:39.393 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:39.393 [135/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:39.393 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:39.393 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:39.393 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:39.393 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:39.393 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:39.393 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:39.393 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:39.393 [143/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.393 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:39.651 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:39.651 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:39.651 [147/268] Linking static target lib/librte_cmdline.a 00:02:39.909 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:40.168 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:40.168 [150/268] Linking static target lib/librte_timer.a 00:02:40.168 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:40.168 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:40.427 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:40.427 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:40.427 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:40.993 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:40.993 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:40.993 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.249 [159/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:41.249 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:41.249 [161/268] Linking static target lib/librte_compressdev.a 00:02:41.250 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:41.507 [163/268] Linking static target lib/librte_ethdev.a 00:02:41.763 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:41.763 [165/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:41.763 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:41.763 [167/268] Linking static target lib/librte_dmadev.a 00:02:41.763 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:41.763 [169/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.020 [170/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:42.020 [171/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:42.020 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:42.020 [173/268] Linking static target lib/librte_hash.a 00:02:42.705 [174/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:42.705 [175/268] Compiling C 
object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:42.705 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.705 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:42.705 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:42.964 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:42.964 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:42.964 [181/268] Linking static target lib/librte_cryptodev.a 00:02:42.964 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:42.964 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.528 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.528 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:43.528 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:43.528 [187/268] Linking static target lib/librte_power.a 00:02:43.528 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:43.528 [189/268] Linking static target lib/librte_reorder.a 00:02:43.785 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:43.785 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:44.347 [192/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.347 [193/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:44.347 [194/268] Linking static target lib/librte_security.a 00:02:44.605 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:44.864 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:45.120 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.120 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:45.376 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:45.376 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:45.376 [201/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.633 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:45.633 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:45.970 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:45.970 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:45.970 [206/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.228 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:46.228 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:46.228 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:46.485 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:46.485 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:46.485 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:46.485 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:46.743 [214/268] 
Linking static target drivers/librte_bus_vdev.a 00:02:46.743 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:46.743 [216/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:46.743 [217/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:46.743 [218/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:46.743 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:46.743 [220/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:47.001 [221/268] Linking static target drivers/librte_bus_pci.a 00:02:47.001 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:47.001 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.001 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:47.001 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:47.001 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:47.567 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.567 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:48.939 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.198 [230/268] Linking target lib/librte_eal.so.24.1 00:02:49.198 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:49.490 [232/268] Linking target lib/librte_ring.so.24.1 00:02:49.490 [233/268] Linking target lib/librte_dmadev.so.24.1 00:02:49.490 [234/268] Linking target lib/librte_meter.so.24.1 00:02:49.490 [235/268] Linking target lib/librte_timer.so.24.1 00:02:49.490 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:49.490 [237/268] Linking target lib/librte_pci.so.24.1 00:02:49.490 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:49.490 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:49.490 [240/268] Linking target lib/librte_rcu.so.24.1 00:02:49.490 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:49.490 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:49.490 [243/268] Linking target lib/librte_mempool.so.24.1 00:02:49.490 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:49.753 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:49.753 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:49.753 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:50.012 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:50.012 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:50.012 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:50.012 [251/268] Linking target lib/librte_net.so.24.1 00:02:50.012 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:50.012 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:50.012 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:50.270 [255/268] Generating 
symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:50.270 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:50.270 [257/268] Linking target lib/librte_cmdline.so.24.1 00:02:50.528 [258/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.528 [259/268] Linking target lib/librte_hash.so.24.1 00:02:50.528 [260/268] Linking target lib/librte_security.so.24.1 00:02:50.528 [261/268] Linking target lib/librte_ethdev.so.24.1 00:02:50.528 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:50.785 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:50.785 [264/268] Linking target lib/librte_power.so.24.1 00:02:53.310 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:53.310 [266/268] Linking static target lib/librte_vhost.a 00:02:55.204 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.204 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:55.204 INFO: autodetecting backend as ninja 00:02:55.204 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:56.132 CC lib/ut/ut.o 00:02:56.132 CC lib/ut_mock/mock.o 00:02:56.132 CC lib/log/log.o 00:02:56.132 CC lib/log/log_deprecated.o 00:02:56.132 CC lib/log/log_flags.o 00:02:56.388 LIB libspdk_ut.a 00:02:56.388 LIB libspdk_log.a 00:02:56.388 LIB libspdk_ut_mock.a 00:02:56.388 SO libspdk_ut.so.2.0 00:02:56.388 SO libspdk_ut_mock.so.6.0 00:02:56.388 SO libspdk_log.so.7.0 00:02:56.388 SYMLINK libspdk_ut_mock.so 00:02:56.388 SYMLINK libspdk_ut.so 00:02:56.645 SYMLINK libspdk_log.so 00:02:56.645 CC lib/util/bit_array.o 00:02:56.645 CC lib/util/base64.o 00:02:56.645 CC lib/util/cpuset.o 00:02:56.645 CC lib/util/crc16.o 00:02:56.646 CC lib/util/crc32.o 00:02:56.646 CC lib/util/crc32c.o 00:02:56.646 CXX lib/trace_parser/trace.o 00:02:56.646 CC lib/dma/dma.o 00:02:56.646 CC lib/ioat/ioat.o 00:02:56.904 CC lib/vfio_user/host/vfio_user_pci.o 00:02:56.904 CC lib/util/crc32_ieee.o 00:02:56.904 CC lib/vfio_user/host/vfio_user.o 00:02:56.904 CC lib/util/crc64.o 00:02:56.904 CC lib/util/dif.o 00:02:56.904 CC lib/util/fd.o 00:02:57.180 CC lib/util/file.o 00:02:57.180 LIB libspdk_dma.a 00:02:57.180 CC lib/util/hexlify.o 00:02:57.180 SO libspdk_dma.so.4.0 00:02:57.180 CC lib/util/iov.o 00:02:57.180 CC lib/util/math.o 00:02:57.180 SYMLINK libspdk_dma.so 00:02:57.180 CC lib/util/pipe.o 00:02:57.180 CC lib/util/strerror_tls.o 00:02:57.180 CC lib/util/string.o 00:02:57.180 LIB libspdk_ioat.a 00:02:57.180 LIB libspdk_vfio_user.a 00:02:57.180 CC lib/util/uuid.o 00:02:57.180 SO libspdk_ioat.so.7.0 00:02:57.180 SO libspdk_vfio_user.so.5.0 00:02:57.180 SYMLINK libspdk_ioat.so 00:02:57.438 CC lib/util/fd_group.o 00:02:57.438 CC lib/util/xor.o 00:02:57.438 SYMLINK libspdk_vfio_user.so 00:02:57.438 CC lib/util/zipf.o 00:02:57.697 LIB libspdk_util.a 00:02:57.955 SO libspdk_util.so.9.1 00:02:57.955 LIB libspdk_trace_parser.a 00:02:57.955 SO libspdk_trace_parser.so.5.0 00:02:58.213 SYMLINK libspdk_util.so 00:02:58.213 SYMLINK libspdk_trace_parser.so 00:02:58.213 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:58.213 CC lib/rdma_provider/common.o 00:02:58.213 CC lib/vmd/led.o 00:02:58.213 CC lib/vmd/vmd.o 00:02:58.213 CC lib/env_dpdk/env.o 00:02:58.213 CC lib/env_dpdk/memory.o 00:02:58.213 CC lib/rdma_utils/rdma_utils.o 00:02:58.213 CC 
lib/conf/conf.o 00:02:58.213 CC lib/json/json_parse.o 00:02:58.213 CC lib/idxd/idxd.o 00:02:58.472 CC lib/env_dpdk/pci.o 00:02:58.472 CC lib/env_dpdk/init.o 00:02:58.472 LIB libspdk_rdma_provider.a 00:02:58.472 SO libspdk_rdma_provider.so.6.0 00:02:58.472 LIB libspdk_conf.a 00:02:58.472 SO libspdk_conf.so.6.0 00:02:58.472 LIB libspdk_rdma_utils.a 00:02:58.472 CC lib/json/json_util.o 00:02:58.730 SYMLINK libspdk_rdma_provider.so 00:02:58.730 CC lib/json/json_write.o 00:02:58.730 SO libspdk_rdma_utils.so.1.0 00:02:58.730 SYMLINK libspdk_conf.so 00:02:58.730 CC lib/env_dpdk/threads.o 00:02:58.730 SYMLINK libspdk_rdma_utils.so 00:02:58.730 CC lib/idxd/idxd_user.o 00:02:58.730 CC lib/idxd/idxd_kernel.o 00:02:58.989 CC lib/env_dpdk/pci_ioat.o 00:02:58.989 CC lib/env_dpdk/pci_virtio.o 00:02:58.989 LIB libspdk_json.a 00:02:58.989 CC lib/env_dpdk/pci_vmd.o 00:02:58.989 SO libspdk_json.so.6.0 00:02:58.989 CC lib/env_dpdk/pci_idxd.o 00:02:58.989 CC lib/env_dpdk/pci_event.o 00:02:58.989 LIB libspdk_idxd.a 00:02:58.989 SYMLINK libspdk_json.so 00:02:58.989 CC lib/env_dpdk/sigbus_handler.o 00:02:58.989 LIB libspdk_vmd.a 00:02:58.989 SO libspdk_idxd.so.12.0 00:02:58.989 CC lib/env_dpdk/pci_dpdk.o 00:02:59.248 SO libspdk_vmd.so.6.0 00:02:59.248 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:59.248 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:59.248 SYMLINK libspdk_idxd.so 00:02:59.248 SYMLINK libspdk_vmd.so 00:02:59.248 CC lib/jsonrpc/jsonrpc_server.o 00:02:59.248 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:59.248 CC lib/jsonrpc/jsonrpc_client.o 00:02:59.248 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:59.506 LIB libspdk_jsonrpc.a 00:02:59.764 SO libspdk_jsonrpc.so.6.0 00:02:59.764 SYMLINK libspdk_jsonrpc.so 00:03:00.022 CC lib/rpc/rpc.o 00:03:00.279 LIB libspdk_env_dpdk.a 00:03:00.279 LIB libspdk_rpc.a 00:03:00.279 SO libspdk_env_dpdk.so.14.1 00:03:00.279 SO libspdk_rpc.so.6.0 00:03:00.279 SYMLINK libspdk_rpc.so 00:03:00.537 SYMLINK libspdk_env_dpdk.so 00:03:00.537 CC lib/trace/trace_flags.o 00:03:00.537 CC lib/trace/trace.o 00:03:00.537 CC lib/trace/trace_rpc.o 00:03:00.537 CC lib/notify/notify.o 00:03:00.537 CC lib/keyring/keyring.o 00:03:00.537 CC lib/notify/notify_rpc.o 00:03:00.537 CC lib/keyring/keyring_rpc.o 00:03:00.795 LIB libspdk_notify.a 00:03:00.795 SO libspdk_notify.so.6.0 00:03:01.051 SYMLINK libspdk_notify.so 00:03:01.051 LIB libspdk_trace.a 00:03:01.051 SO libspdk_trace.so.10.0 00:03:01.051 LIB libspdk_keyring.a 00:03:01.051 SO libspdk_keyring.so.1.0 00:03:01.051 SYMLINK libspdk_trace.so 00:03:01.308 SYMLINK libspdk_keyring.so 00:03:01.308 CC lib/thread/thread.o 00:03:01.308 CC lib/thread/iobuf.o 00:03:01.308 CC lib/sock/sock.o 00:03:01.308 CC lib/sock/sock_rpc.o 00:03:01.889 LIB libspdk_sock.a 00:03:01.889 SO libspdk_sock.so.10.0 00:03:01.889 SYMLINK libspdk_sock.so 00:03:02.146 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:02.146 CC lib/nvme/nvme_ctrlr.o 00:03:02.146 CC lib/nvme/nvme_ns_cmd.o 00:03:02.146 CC lib/nvme/nvme_pcie_common.o 00:03:02.146 CC lib/nvme/nvme_fabric.o 00:03:02.146 CC lib/nvme/nvme_ns.o 00:03:02.146 CC lib/nvme/nvme_pcie.o 00:03:02.146 CC lib/nvme/nvme_qpair.o 00:03:02.146 CC lib/nvme/nvme.o 00:03:03.175 CC lib/nvme/nvme_quirks.o 00:03:03.175 CC lib/nvme/nvme_transport.o 00:03:03.175 CC lib/nvme/nvme_discovery.o 00:03:03.175 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:03.175 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:03.433 LIB libspdk_thread.a 00:03:03.433 CC lib/nvme/nvme_tcp.o 00:03:03.433 SO libspdk_thread.so.10.1 00:03:03.433 CC lib/nvme/nvme_opal.o 00:03:03.433 SYMLINK libspdk_thread.so 00:03:03.433 
CC lib/nvme/nvme_io_msg.o 00:03:03.691 CC lib/nvme/nvme_poll_group.o 00:03:03.691 CC lib/nvme/nvme_zns.o 00:03:03.691 CC lib/nvme/nvme_stubs.o 00:03:03.691 CC lib/nvme/nvme_auth.o 00:03:03.948 CC lib/nvme/nvme_cuse.o 00:03:03.948 CC lib/nvme/nvme_rdma.o 00:03:04.512 CC lib/accel/accel.o 00:03:04.512 CC lib/accel/accel_rpc.o 00:03:04.512 CC lib/blob/blobstore.o 00:03:04.512 CC lib/init/json_config.o 00:03:04.769 CC lib/init/subsystem.o 00:03:04.769 CC lib/virtio/virtio.o 00:03:04.769 CC lib/virtio/virtio_vhost_user.o 00:03:05.026 CC lib/blob/request.o 00:03:05.026 CC lib/blob/zeroes.o 00:03:05.026 CC lib/blob/blob_bs_dev.o 00:03:05.026 CC lib/init/subsystem_rpc.o 00:03:05.026 CC lib/init/rpc.o 00:03:05.027 CC lib/virtio/virtio_vfio_user.o 00:03:05.286 CC lib/virtio/virtio_pci.o 00:03:05.286 LIB libspdk_init.a 00:03:05.286 CC lib/accel/accel_sw.o 00:03:05.286 SO libspdk_init.so.5.0 00:03:05.554 SYMLINK libspdk_init.so 00:03:05.554 CC lib/event/app.o 00:03:05.554 CC lib/event/app_rpc.o 00:03:05.554 CC lib/event/log_rpc.o 00:03:05.554 CC lib/event/reactor.o 00:03:05.554 CC lib/event/scheduler_static.o 00:03:05.554 LIB libspdk_virtio.a 00:03:05.812 LIB libspdk_accel.a 00:03:05.812 SO libspdk_virtio.so.7.0 00:03:05.812 SO libspdk_accel.so.15.1 00:03:05.812 SYMLINK libspdk_virtio.so 00:03:05.812 SYMLINK libspdk_accel.so 00:03:06.068 CC lib/bdev/bdev.o 00:03:06.068 CC lib/bdev/bdev_rpc.o 00:03:06.068 CC lib/bdev/bdev_zone.o 00:03:06.068 CC lib/bdev/scsi_nvme.o 00:03:06.068 CC lib/bdev/part.o 00:03:06.325 LIB libspdk_nvme.a 00:03:06.325 LIB libspdk_event.a 00:03:06.325 SO libspdk_event.so.14.0 00:03:06.582 SO libspdk_nvme.so.13.1 00:03:06.582 SYMLINK libspdk_event.so 00:03:06.839 SYMLINK libspdk_nvme.so 00:03:08.736 LIB libspdk_blob.a 00:03:08.993 SO libspdk_blob.so.11.0 00:03:08.993 SYMLINK libspdk_blob.so 00:03:09.249 CC lib/blobfs/tree.o 00:03:09.249 CC lib/blobfs/blobfs.o 00:03:09.249 CC lib/lvol/lvol.o 00:03:09.506 LIB libspdk_bdev.a 00:03:09.506 SO libspdk_bdev.so.15.1 00:03:09.763 SYMLINK libspdk_bdev.so 00:03:10.020 CC lib/nvmf/ctrlr.o 00:03:10.020 CC lib/scsi/dev.o 00:03:10.020 CC lib/scsi/lun.o 00:03:10.020 CC lib/nvmf/ctrlr_discovery.o 00:03:10.020 CC lib/nvmf/ctrlr_bdev.o 00:03:10.020 CC lib/ftl/ftl_core.o 00:03:10.020 CC lib/nbd/nbd.o 00:03:10.020 CC lib/ublk/ublk.o 00:03:10.276 CC lib/ublk/ublk_rpc.o 00:03:10.276 CC lib/scsi/port.o 00:03:10.533 CC lib/nbd/nbd_rpc.o 00:03:10.533 CC lib/scsi/scsi.o 00:03:10.533 LIB libspdk_lvol.a 00:03:10.533 SO libspdk_lvol.so.10.0 00:03:10.790 CC lib/ftl/ftl_init.o 00:03:10.790 CC lib/scsi/scsi_bdev.o 00:03:10.790 CC lib/scsi/scsi_pr.o 00:03:10.790 SYMLINK libspdk_lvol.so 00:03:10.790 CC lib/scsi/scsi_rpc.o 00:03:10.790 CC lib/scsi/task.o 00:03:10.790 LIB libspdk_nbd.a 00:03:10.790 LIB libspdk_blobfs.a 00:03:10.790 SO libspdk_nbd.so.7.0 00:03:10.790 LIB libspdk_ublk.a 00:03:11.047 SO libspdk_blobfs.so.10.0 00:03:11.047 CC lib/ftl/ftl_layout.o 00:03:11.047 SO libspdk_ublk.so.3.0 00:03:11.047 SYMLINK libspdk_nbd.so 00:03:11.047 CC lib/nvmf/subsystem.o 00:03:11.047 CC lib/nvmf/nvmf.o 00:03:11.047 SYMLINK libspdk_ublk.so 00:03:11.047 CC lib/nvmf/nvmf_rpc.o 00:03:11.047 SYMLINK libspdk_blobfs.so 00:03:11.047 CC lib/nvmf/transport.o 00:03:11.047 CC lib/nvmf/tcp.o 00:03:11.303 CC lib/ftl/ftl_debug.o 00:03:11.303 CC lib/ftl/ftl_io.o 00:03:11.569 CC lib/nvmf/stubs.o 00:03:11.569 LIB libspdk_scsi.a 00:03:11.569 CC lib/nvmf/mdns_server.o 00:03:11.569 SO libspdk_scsi.so.9.0 00:03:11.827 SYMLINK libspdk_scsi.so 00:03:11.827 CC lib/nvmf/rdma.o 00:03:11.827 CC 
lib/ftl/ftl_sb.o 00:03:12.390 CC lib/ftl/ftl_l2p.o 00:03:12.390 CC lib/nvmf/auth.o 00:03:12.390 CC lib/ftl/ftl_l2p_flat.o 00:03:12.390 CC lib/iscsi/conn.o 00:03:12.390 CC lib/ftl/ftl_nv_cache.o 00:03:12.647 CC lib/ftl/ftl_band.o 00:03:12.647 CC lib/vhost/vhost.o 00:03:12.647 CC lib/ftl/ftl_band_ops.o 00:03:12.647 CC lib/ftl/ftl_writer.o 00:03:12.906 CC lib/ftl/ftl_rq.o 00:03:12.906 CC lib/iscsi/init_grp.o 00:03:13.164 CC lib/iscsi/iscsi.o 00:03:13.421 CC lib/iscsi/md5.o 00:03:13.421 CC lib/iscsi/param.o 00:03:13.421 CC lib/vhost/vhost_rpc.o 00:03:13.421 CC lib/ftl/ftl_reloc.o 00:03:13.679 CC lib/iscsi/portal_grp.o 00:03:13.679 CC lib/iscsi/tgt_node.o 00:03:13.679 CC lib/iscsi/iscsi_subsystem.o 00:03:13.679 CC lib/vhost/vhost_scsi.o 00:03:13.937 CC lib/vhost/vhost_blk.o 00:03:13.937 CC lib/ftl/ftl_l2p_cache.o 00:03:13.937 CC lib/ftl/ftl_p2l.o 00:03:13.937 CC lib/iscsi/iscsi_rpc.o 00:03:14.193 CC lib/iscsi/task.o 00:03:14.194 CC lib/ftl/mngt/ftl_mngt.o 00:03:14.540 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:14.540 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:14.540 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:14.540 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:14.797 CC lib/vhost/rte_vhost_user.o 00:03:14.797 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:14.797 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:14.797 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:14.797 LIB libspdk_nvmf.a 00:03:14.797 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:14.797 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:15.054 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:15.054 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:15.054 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:15.054 LIB libspdk_iscsi.a 00:03:15.054 CC lib/ftl/utils/ftl_conf.o 00:03:15.054 SO libspdk_nvmf.so.19.0 00:03:15.054 CC lib/ftl/utils/ftl_md.o 00:03:15.054 CC lib/ftl/utils/ftl_mempool.o 00:03:15.054 SO libspdk_iscsi.so.8.0 00:03:15.311 CC lib/ftl/utils/ftl_bitmap.o 00:03:15.311 CC lib/ftl/utils/ftl_property.o 00:03:15.311 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:15.311 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:15.311 SYMLINK libspdk_nvmf.so 00:03:15.311 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:15.311 SYMLINK libspdk_iscsi.so 00:03:15.311 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:15.311 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:15.568 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:15.568 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:15.568 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:15.568 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:15.568 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:15.568 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:15.568 CC lib/ftl/base/ftl_base_dev.o 00:03:15.568 CC lib/ftl/base/ftl_base_bdev.o 00:03:15.826 CC lib/ftl/ftl_trace.o 00:03:16.084 LIB libspdk_ftl.a 00:03:16.084 LIB libspdk_vhost.a 00:03:16.341 SO libspdk_vhost.so.8.0 00:03:16.341 SO libspdk_ftl.so.9.0 00:03:16.341 SYMLINK libspdk_vhost.so 00:03:16.904 SYMLINK libspdk_ftl.so 00:03:17.161 CC module/env_dpdk/env_dpdk_rpc.o 00:03:17.161 CC module/scheduler/gscheduler/gscheduler.o 00:03:17.161 CC module/accel/ioat/accel_ioat.o 00:03:17.161 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:17.161 CC module/blob/bdev/blob_bdev.o 00:03:17.161 CC module/accel/dsa/accel_dsa.o 00:03:17.161 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:17.161 CC module/accel/error/accel_error.o 00:03:17.161 CC module/keyring/file/keyring.o 00:03:17.161 CC module/sock/posix/posix.o 00:03:17.161 LIB libspdk_env_dpdk_rpc.a 00:03:17.418 SO libspdk_env_dpdk_rpc.so.6.0 00:03:17.418 LIB libspdk_scheduler_gscheduler.a 00:03:17.418 CC module/keyring/file/keyring_rpc.o 
00:03:17.418 SO libspdk_scheduler_gscheduler.so.4.0 00:03:17.418 CC module/accel/error/accel_error_rpc.o 00:03:17.418 SYMLINK libspdk_env_dpdk_rpc.so 00:03:17.418 CC module/accel/dsa/accel_dsa_rpc.o 00:03:17.418 CC module/accel/ioat/accel_ioat_rpc.o 00:03:17.418 LIB libspdk_scheduler_dynamic.a 00:03:17.418 SO libspdk_scheduler_dynamic.so.4.0 00:03:17.418 LIB libspdk_scheduler_dpdk_governor.a 00:03:17.418 LIB libspdk_blob_bdev.a 00:03:17.418 SYMLINK libspdk_scheduler_gscheduler.so 00:03:17.675 LIB libspdk_keyring_file.a 00:03:17.675 SYMLINK libspdk_scheduler_dynamic.so 00:03:17.675 SO libspdk_blob_bdev.so.11.0 00:03:17.675 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:17.675 SO libspdk_keyring_file.so.1.0 00:03:17.675 LIB libspdk_accel_error.a 00:03:17.675 LIB libspdk_accel_ioat.a 00:03:17.675 LIB libspdk_accel_dsa.a 00:03:17.675 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:17.675 SYMLINK libspdk_blob_bdev.so 00:03:17.675 SO libspdk_accel_error.so.2.0 00:03:17.675 SYMLINK libspdk_keyring_file.so 00:03:17.675 SO libspdk_accel_ioat.so.6.0 00:03:17.675 SO libspdk_accel_dsa.so.5.0 00:03:17.675 SYMLINK libspdk_accel_ioat.so 00:03:17.675 SYMLINK libspdk_accel_dsa.so 00:03:17.675 SYMLINK libspdk_accel_error.so 00:03:17.932 CC module/accel/iaa/accel_iaa.o 00:03:17.932 CC module/keyring/linux/keyring.o 00:03:17.932 CC module/bdev/delay/vbdev_delay.o 00:03:17.932 CC module/bdev/lvol/vbdev_lvol.o 00:03:17.932 CC module/bdev/error/vbdev_error.o 00:03:17.932 CC module/bdev/gpt/gpt.o 00:03:17.932 CC module/blobfs/bdev/blobfs_bdev.o 00:03:17.932 CC module/bdev/malloc/bdev_malloc.o 00:03:17.932 CC module/keyring/linux/keyring_rpc.o 00:03:17.932 CC module/bdev/null/bdev_null.o 00:03:17.932 CC module/accel/iaa/accel_iaa_rpc.o 00:03:18.190 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:18.190 LIB libspdk_keyring_linux.a 00:03:18.448 LIB libspdk_sock_posix.a 00:03:18.448 CC module/bdev/error/vbdev_error_rpc.o 00:03:18.448 LIB libspdk_accel_iaa.a 00:03:18.448 SO libspdk_keyring_linux.so.1.0 00:03:18.448 SO libspdk_sock_posix.so.6.0 00:03:18.448 CC module/bdev/gpt/vbdev_gpt.o 00:03:18.448 CC module/bdev/null/bdev_null_rpc.o 00:03:18.448 SO libspdk_accel_iaa.so.3.0 00:03:18.448 LIB libspdk_blobfs_bdev.a 00:03:18.448 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:18.448 SYMLINK libspdk_keyring_linux.so 00:03:18.448 SYMLINK libspdk_sock_posix.so 00:03:18.448 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:18.448 SO libspdk_blobfs_bdev.so.6.0 00:03:18.448 SYMLINK libspdk_accel_iaa.so 00:03:18.448 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:18.448 LIB libspdk_bdev_error.a 00:03:18.448 SYMLINK libspdk_blobfs_bdev.so 00:03:18.705 SO libspdk_bdev_error.so.6.0 00:03:18.705 LIB libspdk_bdev_null.a 00:03:18.705 LIB libspdk_bdev_delay.a 00:03:18.705 SO libspdk_bdev_null.so.6.0 00:03:18.705 SO libspdk_bdev_delay.so.6.0 00:03:18.705 LIB libspdk_bdev_malloc.a 00:03:18.705 CC module/bdev/passthru/vbdev_passthru.o 00:03:18.705 CC module/bdev/nvme/bdev_nvme.o 00:03:18.705 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:18.705 SYMLINK libspdk_bdev_error.so 00:03:18.705 CC module/bdev/nvme/nvme_rpc.o 00:03:18.705 SO libspdk_bdev_malloc.so.6.0 00:03:18.705 CC module/bdev/raid/bdev_raid.o 00:03:18.705 SYMLINK libspdk_bdev_null.so 00:03:18.705 CC module/bdev/nvme/bdev_mdns_client.o 00:03:18.705 SYMLINK libspdk_bdev_delay.so 00:03:18.705 CC module/bdev/nvme/vbdev_opal.o 00:03:18.963 SYMLINK libspdk_bdev_malloc.so 00:03:18.963 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:18.963 LIB libspdk_bdev_gpt.a 00:03:18.963 SO libspdk_bdev_gpt.so.6.0 
00:03:18.963 LIB libspdk_bdev_lvol.a 00:03:18.963 SO libspdk_bdev_lvol.so.6.0 00:03:19.220 SYMLINK libspdk_bdev_gpt.so 00:03:19.220 CC module/bdev/raid/bdev_raid_rpc.o 00:03:19.220 CC module/bdev/raid/bdev_raid_sb.o 00:03:19.220 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:19.220 SYMLINK libspdk_bdev_lvol.so 00:03:19.220 CC module/bdev/raid/raid0.o 00:03:19.220 CC module/bdev/split/vbdev_split.o 00:03:19.220 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:19.220 CC module/bdev/xnvme/bdev_xnvme.o 00:03:19.220 LIB libspdk_bdev_passthru.a 00:03:19.476 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:19.476 SO libspdk_bdev_passthru.so.6.0 00:03:19.476 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:19.477 SYMLINK libspdk_bdev_passthru.so 00:03:19.477 CC module/bdev/split/vbdev_split_rpc.o 00:03:19.477 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:19.477 LIB libspdk_bdev_xnvme.a 00:03:19.735 LIB libspdk_bdev_split.a 00:03:19.735 CC module/bdev/raid/raid1.o 00:03:19.735 SO libspdk_bdev_xnvme.so.3.0 00:03:19.735 CC module/bdev/aio/bdev_aio.o 00:03:19.735 SO libspdk_bdev_split.so.6.0 00:03:19.735 CC module/bdev/aio/bdev_aio_rpc.o 00:03:19.735 CC module/bdev/ftl/bdev_ftl.o 00:03:19.735 CC module/bdev/iscsi/bdev_iscsi.o 00:03:19.735 LIB libspdk_bdev_zone_block.a 00:03:19.735 SYMLINK libspdk_bdev_xnvme.so 00:03:19.735 SO libspdk_bdev_zone_block.so.6.0 00:03:19.735 SYMLINK libspdk_bdev_split.so 00:03:19.735 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:19.735 SYMLINK libspdk_bdev_zone_block.so 00:03:19.735 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:19.992 CC module/bdev/raid/concat.o 00:03:19.992 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:19.992 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:19.992 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:19.992 LIB libspdk_bdev_ftl.a 00:03:19.992 LIB libspdk_bdev_aio.a 00:03:19.992 SO libspdk_bdev_ftl.so.6.0 00:03:20.249 SO libspdk_bdev_aio.so.6.0 00:03:20.249 SYMLINK libspdk_bdev_ftl.so 00:03:20.249 LIB libspdk_bdev_raid.a 00:03:20.249 SYMLINK libspdk_bdev_aio.so 00:03:20.249 LIB libspdk_bdev_iscsi.a 00:03:20.249 SO libspdk_bdev_raid.so.6.0 00:03:20.249 SO libspdk_bdev_iscsi.so.6.0 00:03:20.506 SYMLINK libspdk_bdev_iscsi.so 00:03:20.506 SYMLINK libspdk_bdev_raid.so 00:03:20.764 LIB libspdk_bdev_virtio.a 00:03:20.764 SO libspdk_bdev_virtio.so.6.0 00:03:21.021 SYMLINK libspdk_bdev_virtio.so 00:03:21.586 LIB libspdk_bdev_nvme.a 00:03:21.586 SO libspdk_bdev_nvme.so.7.0 00:03:21.844 SYMLINK libspdk_bdev_nvme.so 00:03:22.410 CC module/event/subsystems/sock/sock.o 00:03:22.410 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:22.410 CC module/event/subsystems/keyring/keyring.o 00:03:22.410 CC module/event/subsystems/iobuf/iobuf.o 00:03:22.410 CC module/event/subsystems/vmd/vmd.o 00:03:22.410 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:22.410 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:22.410 CC module/event/subsystems/scheduler/scheduler.o 00:03:22.410 LIB libspdk_event_vhost_blk.a 00:03:22.666 LIB libspdk_event_keyring.a 00:03:22.666 SO libspdk_event_vhost_blk.so.3.0 00:03:22.666 LIB libspdk_event_scheduler.a 00:03:22.666 LIB libspdk_event_vmd.a 00:03:22.666 LIB libspdk_event_iobuf.a 00:03:22.666 SO libspdk_event_keyring.so.1.0 00:03:22.666 LIB libspdk_event_sock.a 00:03:22.666 SO libspdk_event_scheduler.so.4.0 00:03:22.666 SO libspdk_event_iobuf.so.3.0 00:03:22.666 SO libspdk_event_vmd.so.6.0 00:03:22.666 SYMLINK libspdk_event_vhost_blk.so 00:03:22.666 SO libspdk_event_sock.so.5.0 00:03:22.666 SYMLINK libspdk_event_keyring.so 00:03:22.666 
SYMLINK libspdk_event_vmd.so 00:03:22.666 SYMLINK libspdk_event_iobuf.so 00:03:22.666 SYMLINK libspdk_event_scheduler.so 00:03:22.666 SYMLINK libspdk_event_sock.so 00:03:22.937 CC module/event/subsystems/accel/accel.o 00:03:23.193 LIB libspdk_event_accel.a 00:03:23.193 SO libspdk_event_accel.so.6.0 00:03:23.193 SYMLINK libspdk_event_accel.so 00:03:23.758 CC module/event/subsystems/bdev/bdev.o 00:03:23.758 LIB libspdk_event_bdev.a 00:03:23.758 SO libspdk_event_bdev.so.6.0 00:03:24.015 SYMLINK libspdk_event_bdev.so 00:03:24.272 CC module/event/subsystems/scsi/scsi.o 00:03:24.272 CC module/event/subsystems/nbd/nbd.o 00:03:24.272 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:24.272 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:24.272 CC module/event/subsystems/ublk/ublk.o 00:03:24.272 LIB libspdk_event_nbd.a 00:03:24.272 SO libspdk_event_nbd.so.6.0 00:03:24.530 LIB libspdk_event_ublk.a 00:03:24.530 SYMLINK libspdk_event_nbd.so 00:03:24.530 LIB libspdk_event_scsi.a 00:03:24.530 SO libspdk_event_ublk.so.3.0 00:03:24.530 SO libspdk_event_scsi.so.6.0 00:03:24.530 LIB libspdk_event_nvmf.a 00:03:24.530 SYMLINK libspdk_event_ublk.so 00:03:24.530 SO libspdk_event_nvmf.so.6.0 00:03:24.530 SYMLINK libspdk_event_scsi.so 00:03:24.530 SYMLINK libspdk_event_nvmf.so 00:03:24.788 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:24.788 CC module/event/subsystems/iscsi/iscsi.o 00:03:25.046 LIB libspdk_event_iscsi.a 00:03:25.046 LIB libspdk_event_vhost_scsi.a 00:03:25.046 SO libspdk_event_iscsi.so.6.0 00:03:25.046 SO libspdk_event_vhost_scsi.so.3.0 00:03:25.046 SYMLINK libspdk_event_iscsi.so 00:03:25.046 SYMLINK libspdk_event_vhost_scsi.so 00:03:25.304 SO libspdk.so.6.0 00:03:25.304 SYMLINK libspdk.so 00:03:25.561 CC test/rpc_client/rpc_client_test.o 00:03:25.561 CXX app/trace/trace.o 00:03:25.561 TEST_HEADER include/spdk/accel.h 00:03:25.561 TEST_HEADER include/spdk/accel_module.h 00:03:25.561 TEST_HEADER include/spdk/assert.h 00:03:25.561 TEST_HEADER include/spdk/barrier.h 00:03:25.561 TEST_HEADER include/spdk/base64.h 00:03:25.561 TEST_HEADER include/spdk/bdev.h 00:03:25.561 TEST_HEADER include/spdk/bdev_module.h 00:03:25.561 TEST_HEADER include/spdk/bdev_zone.h 00:03:25.562 TEST_HEADER include/spdk/bit_array.h 00:03:25.562 TEST_HEADER include/spdk/bit_pool.h 00:03:25.562 TEST_HEADER include/spdk/blob_bdev.h 00:03:25.562 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:25.562 TEST_HEADER include/spdk/blobfs.h 00:03:25.562 TEST_HEADER include/spdk/blob.h 00:03:25.562 TEST_HEADER include/spdk/conf.h 00:03:25.562 TEST_HEADER include/spdk/config.h 00:03:25.562 TEST_HEADER include/spdk/cpuset.h 00:03:25.562 TEST_HEADER include/spdk/crc16.h 00:03:25.562 TEST_HEADER include/spdk/crc32.h 00:03:25.562 TEST_HEADER include/spdk/crc64.h 00:03:25.562 TEST_HEADER include/spdk/dif.h 00:03:25.562 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:25.562 TEST_HEADER include/spdk/dma.h 00:03:25.562 TEST_HEADER include/spdk/endian.h 00:03:25.562 TEST_HEADER include/spdk/env_dpdk.h 00:03:25.562 TEST_HEADER include/spdk/env.h 00:03:25.562 TEST_HEADER include/spdk/event.h 00:03:25.562 CC examples/ioat/perf/perf.o 00:03:25.562 TEST_HEADER include/spdk/fd_group.h 00:03:25.562 TEST_HEADER include/spdk/fd.h 00:03:25.562 TEST_HEADER include/spdk/file.h 00:03:25.562 TEST_HEADER include/spdk/ftl.h 00:03:25.562 TEST_HEADER include/spdk/gpt_spec.h 00:03:25.892 TEST_HEADER include/spdk/hexlify.h 00:03:25.892 TEST_HEADER include/spdk/histogram_data.h 00:03:25.892 TEST_HEADER include/spdk/idxd.h 00:03:25.892 TEST_HEADER 
include/spdk/idxd_spec.h 00:03:25.892 CC test/thread/poller_perf/poller_perf.o 00:03:25.892 TEST_HEADER include/spdk/init.h 00:03:25.892 CC examples/util/zipf/zipf.o 00:03:25.892 TEST_HEADER include/spdk/ioat.h 00:03:25.892 TEST_HEADER include/spdk/ioat_spec.h 00:03:25.892 TEST_HEADER include/spdk/iscsi_spec.h 00:03:25.892 TEST_HEADER include/spdk/json.h 00:03:25.892 TEST_HEADER include/spdk/jsonrpc.h 00:03:25.892 TEST_HEADER include/spdk/keyring.h 00:03:25.892 TEST_HEADER include/spdk/keyring_module.h 00:03:25.892 TEST_HEADER include/spdk/likely.h 00:03:25.892 TEST_HEADER include/spdk/log.h 00:03:25.892 TEST_HEADER include/spdk/lvol.h 00:03:25.892 TEST_HEADER include/spdk/memory.h 00:03:25.892 TEST_HEADER include/spdk/mmio.h 00:03:25.892 TEST_HEADER include/spdk/nbd.h 00:03:25.892 TEST_HEADER include/spdk/notify.h 00:03:25.892 TEST_HEADER include/spdk/nvme.h 00:03:25.892 TEST_HEADER include/spdk/nvme_intel.h 00:03:25.892 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:25.892 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:25.892 TEST_HEADER include/spdk/nvme_spec.h 00:03:25.892 TEST_HEADER include/spdk/nvme_zns.h 00:03:25.892 CC test/dma/test_dma/test_dma.o 00:03:25.892 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:25.892 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:25.892 TEST_HEADER include/spdk/nvmf.h 00:03:25.892 TEST_HEADER include/spdk/nvmf_spec.h 00:03:25.892 TEST_HEADER include/spdk/nvmf_transport.h 00:03:25.892 LINK rpc_client_test 00:03:25.892 TEST_HEADER include/spdk/opal.h 00:03:25.892 TEST_HEADER include/spdk/opal_spec.h 00:03:25.892 TEST_HEADER include/spdk/pci_ids.h 00:03:25.892 TEST_HEADER include/spdk/pipe.h 00:03:25.892 TEST_HEADER include/spdk/queue.h 00:03:25.892 TEST_HEADER include/spdk/reduce.h 00:03:25.892 CC test/app/bdev_svc/bdev_svc.o 00:03:25.892 TEST_HEADER include/spdk/rpc.h 00:03:25.892 CC test/env/mem_callbacks/mem_callbacks.o 00:03:25.892 TEST_HEADER include/spdk/scheduler.h 00:03:25.892 TEST_HEADER include/spdk/scsi.h 00:03:25.892 TEST_HEADER include/spdk/scsi_spec.h 00:03:25.892 TEST_HEADER include/spdk/sock.h 00:03:25.892 TEST_HEADER include/spdk/stdinc.h 00:03:25.892 TEST_HEADER include/spdk/string.h 00:03:25.892 TEST_HEADER include/spdk/thread.h 00:03:25.892 TEST_HEADER include/spdk/trace.h 00:03:25.892 TEST_HEADER include/spdk/trace_parser.h 00:03:25.892 TEST_HEADER include/spdk/tree.h 00:03:25.892 TEST_HEADER include/spdk/ublk.h 00:03:25.892 TEST_HEADER include/spdk/util.h 00:03:25.892 TEST_HEADER include/spdk/uuid.h 00:03:25.892 TEST_HEADER include/spdk/version.h 00:03:25.892 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:25.892 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:25.892 TEST_HEADER include/spdk/vhost.h 00:03:25.892 TEST_HEADER include/spdk/vmd.h 00:03:25.892 TEST_HEADER include/spdk/xor.h 00:03:25.892 TEST_HEADER include/spdk/zipf.h 00:03:25.892 LINK poller_perf 00:03:25.892 CXX test/cpp_headers/accel.o 00:03:25.892 LINK ioat_perf 00:03:25.892 LINK interrupt_tgt 00:03:25.892 LINK zipf 00:03:26.149 LINK bdev_svc 00:03:26.149 LINK spdk_trace 00:03:26.149 CC test/app/histogram_perf/histogram_perf.o 00:03:26.149 LINK test_dma 00:03:26.149 CXX test/cpp_headers/accel_module.o 00:03:26.149 CC examples/ioat/verify/verify.o 00:03:26.406 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:26.406 LINK histogram_perf 00:03:26.406 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:26.406 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:26.406 CXX test/cpp_headers/assert.o 00:03:26.406 CC test/app/jsoncat/jsoncat.o 00:03:26.664 CC app/trace_record/trace_record.o 
00:03:26.664 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:26.664 LINK verify 00:03:26.664 CXX test/cpp_headers/barrier.o 00:03:26.664 CC app/nvmf_tgt/nvmf_main.o 00:03:26.664 LINK jsoncat 00:03:26.664 LINK mem_callbacks 00:03:26.664 CC app/iscsi_tgt/iscsi_tgt.o 00:03:26.920 LINK spdk_trace_record 00:03:26.920 CXX test/cpp_headers/base64.o 00:03:26.920 LINK nvmf_tgt 00:03:26.920 LINK iscsi_tgt 00:03:27.177 LINK nvme_fuzz 00:03:27.177 CXX test/cpp_headers/bdev.o 00:03:27.177 CC test/env/vtophys/vtophys.o 00:03:27.177 CC examples/thread/thread/thread_ex.o 00:03:27.177 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:27.434 LINK vhost_fuzz 00:03:27.434 CXX test/cpp_headers/bdev_module.o 00:03:27.434 CC app/spdk_lspci/spdk_lspci.o 00:03:27.434 LINK vtophys 00:03:27.434 LINK env_dpdk_post_init 00:03:27.434 CC app/spdk_tgt/spdk_tgt.o 00:03:27.691 CXX test/cpp_headers/bdev_zone.o 00:03:27.691 CC app/spdk_nvme_perf/perf.o 00:03:27.691 LINK spdk_lspci 00:03:27.691 CC app/spdk_nvme_identify/identify.o 00:03:27.691 LINK thread 00:03:27.691 CC app/spdk_nvme_discover/discovery_aer.o 00:03:27.948 CC app/spdk_top/spdk_top.o 00:03:27.948 LINK spdk_tgt 00:03:27.948 CXX test/cpp_headers/bit_array.o 00:03:27.948 CC test/env/memory/memory_ut.o 00:03:28.204 LINK spdk_nvme_discover 00:03:28.204 CXX test/cpp_headers/bit_pool.o 00:03:28.204 CC examples/sock/hello_world/hello_sock.o 00:03:28.461 CC examples/vmd/lsvmd/lsvmd.o 00:03:28.461 CC test/env/pci/pci_ut.o 00:03:28.461 CXX test/cpp_headers/blob_bdev.o 00:03:28.461 LINK hello_sock 00:03:28.461 CC examples/idxd/perf/perf.o 00:03:28.719 LINK lsvmd 00:03:28.719 LINK iscsi_fuzz 00:03:28.719 CXX test/cpp_headers/blobfs_bdev.o 00:03:28.719 LINK spdk_nvme_identify 00:03:28.975 LINK spdk_nvme_perf 00:03:28.975 CC examples/accel/perf/accel_perf.o 00:03:28.975 CXX test/cpp_headers/blobfs.o 00:03:28.975 CC examples/vmd/led/led.o 00:03:28.975 LINK idxd_perf 00:03:28.975 CC test/app/stub/stub.o 00:03:29.233 LINK pci_ut 00:03:29.233 LINK led 00:03:29.233 CXX test/cpp_headers/blob.o 00:03:29.233 LINK stub 00:03:29.233 CC examples/nvme/hello_world/hello_world.o 00:03:29.490 CC examples/nvme/reconnect/reconnect.o 00:03:29.490 CC examples/blob/hello_world/hello_blob.o 00:03:29.490 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:29.490 CC examples/nvme/arbitration/arbitration.o 00:03:29.490 LINK accel_perf 00:03:29.490 LINK hello_world 00:03:29.748 CXX test/cpp_headers/conf.o 00:03:29.748 LINK memory_ut 00:03:29.748 CC examples/nvme/hotplug/hotplug.o 00:03:29.748 LINK hello_blob 00:03:29.748 LINK spdk_top 00:03:29.748 CXX test/cpp_headers/config.o 00:03:29.748 CXX test/cpp_headers/cpuset.o 00:03:30.006 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:30.006 LINK reconnect 00:03:30.006 LINK arbitration 00:03:30.006 CXX test/cpp_headers/crc16.o 00:03:30.006 LINK hotplug 00:03:30.006 CC examples/nvme/abort/abort.o 00:03:30.006 LINK cmb_copy 00:03:30.265 CC examples/blob/cli/blobcli.o 00:03:30.265 CC examples/bdev/hello_world/hello_bdev.o 00:03:30.265 CC app/vhost/vhost.o 00:03:30.265 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:30.265 CXX test/cpp_headers/crc32.o 00:03:30.523 CC test/event/reactor/reactor.o 00:03:30.523 CC test/event/event_perf/event_perf.o 00:03:30.523 LINK hello_bdev 00:03:30.523 LINK pmr_persistence 00:03:30.523 LINK nvme_manage 00:03:30.523 CC test/event/reactor_perf/reactor_perf.o 00:03:30.523 LINK vhost 00:03:30.523 LINK reactor 00:03:30.523 LINK event_perf 00:03:30.523 LINK abort 00:03:30.524 CXX test/cpp_headers/crc64.o 00:03:30.780 CXX 
test/cpp_headers/dif.o 00:03:30.780 CXX test/cpp_headers/dma.o 00:03:30.780 CXX test/cpp_headers/endian.o 00:03:30.780 LINK reactor_perf 00:03:30.780 CC examples/bdev/bdevperf/bdevperf.o 00:03:30.780 CXX test/cpp_headers/env_dpdk.o 00:03:31.036 CC app/spdk_dd/spdk_dd.o 00:03:31.036 CXX test/cpp_headers/env.o 00:03:31.036 CC test/nvme/aer/aer.o 00:03:31.036 CC test/event/app_repeat/app_repeat.o 00:03:31.036 CC test/accel/dif/dif.o 00:03:31.036 LINK blobcli 00:03:31.292 CC test/blobfs/mkfs/mkfs.o 00:03:31.292 CC test/event/scheduler/scheduler.o 00:03:31.292 CXX test/cpp_headers/event.o 00:03:31.292 CC test/lvol/esnap/esnap.o 00:03:31.292 LINK app_repeat 00:03:31.292 LINK spdk_dd 00:03:31.548 LINK mkfs 00:03:31.548 LINK aer 00:03:31.548 CXX test/cpp_headers/fd_group.o 00:03:31.548 CXX test/cpp_headers/fd.o 00:03:31.548 LINK scheduler 00:03:31.548 CC app/fio/nvme/fio_plugin.o 00:03:31.804 LINK dif 00:03:31.804 CXX test/cpp_headers/file.o 00:03:31.804 CC test/nvme/reset/reset.o 00:03:31.804 CC app/fio/bdev/fio_plugin.o 00:03:31.804 CC test/nvme/sgl/sgl.o 00:03:31.804 CC test/nvme/e2edp/nvme_dp.o 00:03:31.804 LINK bdevperf 00:03:31.804 CXX test/cpp_headers/ftl.o 00:03:32.069 CXX test/cpp_headers/gpt_spec.o 00:03:32.069 CXX test/cpp_headers/hexlify.o 00:03:32.069 LINK reset 00:03:32.069 CXX test/cpp_headers/histogram_data.o 00:03:32.069 CXX test/cpp_headers/idxd.o 00:03:32.069 LINK nvme_dp 00:03:32.069 CXX test/cpp_headers/idxd_spec.o 00:03:32.069 LINK sgl 00:03:32.326 CC examples/nvmf/nvmf/nvmf.o 00:03:32.326 LINK spdk_nvme 00:03:32.326 CXX test/cpp_headers/init.o 00:03:32.326 LINK spdk_bdev 00:03:32.326 CC test/nvme/overhead/overhead.o 00:03:32.326 CC test/nvme/err_injection/err_injection.o 00:03:32.326 CC test/nvme/startup/startup.o 00:03:32.326 CXX test/cpp_headers/ioat.o 00:03:32.326 CC test/nvme/reserve/reserve.o 00:03:32.586 CXX test/cpp_headers/ioat_spec.o 00:03:32.586 CXX test/cpp_headers/iscsi_spec.o 00:03:32.586 CC test/bdev/bdevio/bdevio.o 00:03:32.586 CXX test/cpp_headers/json.o 00:03:32.586 LINK err_injection 00:03:32.586 LINK startup 00:03:32.586 LINK nvmf 00:03:32.586 LINK reserve 00:03:32.844 LINK overhead 00:03:32.844 CXX test/cpp_headers/jsonrpc.o 00:03:32.844 CXX test/cpp_headers/keyring.o 00:03:32.844 CC test/nvme/simple_copy/simple_copy.o 00:03:32.844 CXX test/cpp_headers/keyring_module.o 00:03:32.844 CC test/nvme/connect_stress/connect_stress.o 00:03:32.844 CC test/nvme/boot_partition/boot_partition.o 00:03:32.844 CXX test/cpp_headers/likely.o 00:03:33.102 CC test/nvme/compliance/nvme_compliance.o 00:03:33.102 LINK bdevio 00:03:33.102 CC test/nvme/fused_ordering/fused_ordering.o 00:03:33.102 LINK simple_copy 00:03:33.102 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:33.102 LINK boot_partition 00:03:33.102 CXX test/cpp_headers/log.o 00:03:33.102 CXX test/cpp_headers/lvol.o 00:03:33.102 LINK connect_stress 00:03:33.359 CXX test/cpp_headers/memory.o 00:03:33.359 LINK fused_ordering 00:03:33.359 CXX test/cpp_headers/mmio.o 00:03:33.359 LINK doorbell_aers 00:03:33.359 CXX test/cpp_headers/nbd.o 00:03:33.359 CXX test/cpp_headers/notify.o 00:03:33.359 CXX test/cpp_headers/nvme.o 00:03:33.359 CC test/nvme/fdp/fdp.o 00:03:33.359 LINK nvme_compliance 00:03:33.359 CC test/nvme/cuse/cuse.o 00:03:33.359 CXX test/cpp_headers/nvme_intel.o 00:03:33.616 CXX test/cpp_headers/nvme_ocssd.o 00:03:33.616 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:33.616 CXX test/cpp_headers/nvme_spec.o 00:03:33.616 CXX test/cpp_headers/nvme_zns.o 00:03:33.616 CXX test/cpp_headers/nvmf_cmd.o 00:03:33.616 CXX 
test/cpp_headers/nvmf_fc_spec.o 00:03:33.616 CXX test/cpp_headers/nvmf.o 00:03:33.872 CXX test/cpp_headers/nvmf_spec.o 00:03:33.872 CXX test/cpp_headers/nvmf_transport.o 00:03:33.872 CXX test/cpp_headers/opal.o 00:03:33.872 LINK fdp 00:03:33.872 CXX test/cpp_headers/opal_spec.o 00:03:33.872 CXX test/cpp_headers/pci_ids.o 00:03:33.872 CXX test/cpp_headers/pipe.o 00:03:33.872 CXX test/cpp_headers/queue.o 00:03:33.872 CXX test/cpp_headers/reduce.o 00:03:33.872 CXX test/cpp_headers/rpc.o 00:03:33.872 CXX test/cpp_headers/scheduler.o 00:03:33.872 CXX test/cpp_headers/scsi.o 00:03:34.130 CXX test/cpp_headers/scsi_spec.o 00:03:34.130 CXX test/cpp_headers/sock.o 00:03:34.130 CXX test/cpp_headers/stdinc.o 00:03:34.130 CXX test/cpp_headers/string.o 00:03:34.130 CXX test/cpp_headers/thread.o 00:03:34.130 CXX test/cpp_headers/trace.o 00:03:34.130 CXX test/cpp_headers/trace_parser.o 00:03:34.130 CXX test/cpp_headers/tree.o 00:03:34.130 CXX test/cpp_headers/ublk.o 00:03:34.130 CXX test/cpp_headers/util.o 00:03:34.130 CXX test/cpp_headers/uuid.o 00:03:34.130 CXX test/cpp_headers/version.o 00:03:34.130 CXX test/cpp_headers/vfio_user_pci.o 00:03:34.130 CXX test/cpp_headers/vfio_user_spec.o 00:03:34.387 CXX test/cpp_headers/vhost.o 00:03:34.387 CXX test/cpp_headers/vmd.o 00:03:34.387 CXX test/cpp_headers/xor.o 00:03:34.387 CXX test/cpp_headers/zipf.o 00:03:34.953 LINK cuse 00:03:38.259 LINK esnap 00:03:38.521 00:03:38.521 real 1m22.713s 00:03:38.521 user 7m50.047s 00:03:38.521 sys 2m2.269s 00:03:38.521 19:25:29 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:38.521 19:25:29 make -- common/autotest_common.sh@10 -- $ set +x 00:03:38.521 ************************************ 00:03:38.521 END TEST make 00:03:38.521 ************************************ 00:03:38.521 19:25:29 -- common/autotest_common.sh@1142 -- $ return 0 00:03:38.521 19:25:29 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:38.521 19:25:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:38.521 19:25:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:38.521 19:25:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.521 19:25:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:38.521 19:25:29 -- pm/common@44 -- $ pid=5236 00:03:38.521 19:25:29 -- pm/common@50 -- $ kill -TERM 5236 00:03:38.521 19:25:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.521 19:25:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:38.521 19:25:29 -- pm/common@44 -- $ pid=5237 00:03:38.521 19:25:29 -- pm/common@50 -- $ kill -TERM 5237 00:03:38.521 19:25:29 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:38.521 19:25:29 -- nvmf/common.sh@7 -- # uname -s 00:03:38.521 19:25:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:38.521 19:25:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:38.521 19:25:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:38.521 19:25:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:38.521 19:25:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:38.521 19:25:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:38.521 19:25:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:38.521 19:25:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:38.521 19:25:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:38.521 19:25:29 -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:03:38.521 19:25:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85d4478b-635a-462e-8237-2d2157ba9cca 00:03:38.521 19:25:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=85d4478b-635a-462e-8237-2d2157ba9cca 00:03:38.521 19:25:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:38.521 19:25:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:38.521 19:25:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:38.521 19:25:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:38.521 19:25:29 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:38.521 19:25:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:38.521 19:25:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:38.521 19:25:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:38.521 19:25:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.521 19:25:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.521 19:25:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.521 19:25:29 -- paths/export.sh@5 -- # export PATH 00:03:38.521 19:25:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.521 19:25:29 -- nvmf/common.sh@47 -- # : 0 00:03:38.521 19:25:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:38.521 19:25:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:38.521 19:25:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:38.521 19:25:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:38.521 19:25:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:38.521 19:25:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:38.521 19:25:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:38.521 19:25:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:38.521 19:25:29 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:38.521 19:25:29 -- spdk/autotest.sh@32 -- # uname -s 00:03:38.521 19:25:29 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:38.521 19:25:29 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:38.521 19:25:29 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:38.521 19:25:29 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:38.521 19:25:29 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 
00:03:38.521 19:25:29 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:38.778 19:25:29 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:38.778 19:25:29 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:38.778 19:25:29 -- spdk/autotest.sh@48 -- # udevadm_pid=53838 00:03:38.778 19:25:29 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:38.778 19:25:29 -- pm/common@17 -- # local monitor 00:03:38.778 19:25:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.778 19:25:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.778 19:25:29 -- pm/common@25 -- # sleep 1 00:03:38.778 19:25:29 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:38.778 19:25:29 -- pm/common@21 -- # date +%s 00:03:38.778 19:25:29 -- pm/common@21 -- # date +%s 00:03:38.778 19:25:29 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721071529 00:03:38.778 19:25:29 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721071529 00:03:38.778 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721071529_collect-cpu-load.pm.log 00:03:38.778 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721071529_collect-vmstat.pm.log 00:03:39.711 19:25:30 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:39.711 19:25:30 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:39.711 19:25:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:39.711 19:25:30 -- common/autotest_common.sh@10 -- # set +x 00:03:39.711 19:25:30 -- spdk/autotest.sh@59 -- # create_test_list 00:03:39.711 19:25:30 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:39.711 19:25:30 -- common/autotest_common.sh@10 -- # set +x 00:03:39.711 19:25:30 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:39.711 19:25:30 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:39.711 19:25:30 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:39.711 19:25:30 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:39.711 19:25:30 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:39.711 19:25:30 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:39.711 19:25:30 -- common/autotest_common.sh@1455 -- # uname 00:03:39.711 19:25:30 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:39.711 19:25:30 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:39.711 19:25:30 -- common/autotest_common.sh@1475 -- # uname 00:03:39.711 19:25:30 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:39.711 19:25:30 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:39.711 19:25:30 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:39.711 19:25:30 -- spdk/autotest.sh@72 -- # hash lcov 00:03:39.711 19:25:30 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:39.711 19:25:30 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:39.711 --rc lcov_branch_coverage=1 00:03:39.711 --rc lcov_function_coverage=1 00:03:39.711 --rc genhtml_branch_coverage=1 00:03:39.711 --rc genhtml_function_coverage=1 00:03:39.711 --rc genhtml_legend=1 00:03:39.711 --rc geninfo_all_blocks=1 00:03:39.711 ' 00:03:39.711 19:25:30 -- spdk/autotest.sh@80 -- # 
LCOV_OPTS=' 00:03:39.711 --rc lcov_branch_coverage=1 00:03:39.712 --rc lcov_function_coverage=1 00:03:39.712 --rc genhtml_branch_coverage=1 00:03:39.712 --rc genhtml_function_coverage=1 00:03:39.712 --rc genhtml_legend=1 00:03:39.712 --rc geninfo_all_blocks=1 00:03:39.712 ' 00:03:39.712 19:25:30 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:39.712 --rc lcov_branch_coverage=1 00:03:39.712 --rc lcov_function_coverage=1 00:03:39.712 --rc genhtml_branch_coverage=1 00:03:39.712 --rc genhtml_function_coverage=1 00:03:39.712 --rc genhtml_legend=1 00:03:39.712 --rc geninfo_all_blocks=1 00:03:39.712 --no-external' 00:03:39.712 19:25:30 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:39.712 --rc lcov_branch_coverage=1 00:03:39.712 --rc lcov_function_coverage=1 00:03:39.712 --rc genhtml_branch_coverage=1 00:03:39.712 --rc genhtml_function_coverage=1 00:03:39.712 --rc genhtml_legend=1 00:03:39.712 --rc geninfo_all_blocks=1 00:03:39.712 --no-external' 00:03:39.712 19:25:30 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:39.970 lcov: LCOV version 1.14 00:03:39.970 19:25:30 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:54.927 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:54.927 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:07.125 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:07.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:07.125 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no 
functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:07.126 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:07.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:07.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:10.404 19:26:01 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:10.404 19:26:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:10.404 19:26:01 -- common/autotest_common.sh@10 -- # set +x 00:04:10.404 19:26:01 -- spdk/autotest.sh@91 -- # rm -f 00:04:10.404 19:26:01 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:10.971 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.537 0000:00:11.0 (1b36 0010): Already using the nvme driver 
00:04:11.537 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:11.537 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:11.537 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:11.796 19:26:02 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:11.796 19:26:02 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:11.796 19:26:02 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:11.796 19:26:02 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:11.796 19:26:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:11.796 19:26:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:11.796 19:26:02 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:11.796 19:26:02 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:11.796 19:26:02 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:11.796 19:26:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:11.796 19:26:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:11.796 19:26:02 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:11.796 19:26:02 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:11.796 19:26:02 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:11.796 19:26:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:11.796 19:26:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:04:11.796 19:26:02 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:04:11.796 19:26:02 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:11.796 19:26:02 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:11.796 19:26:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:11.796 19:26:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:04:11.796 19:26:02 -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:04:11.796 19:26:02 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:11.796 19:26:02 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:11.796 19:26:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:11.796 19:26:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:04:11.796 19:26:02 -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:04:11.796 19:26:02 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:11.796 19:26:02 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:11.796 19:26:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:11.796 19:26:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:04:11.796 19:26:02 -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:04:11.796 19:26:02 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:11.796 19:26:02 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:11.796 19:26:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:11.796 19:26:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:04:11.796 19:26:02 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:04:11.796 19:26:02 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:11.796 19:26:02 -- common/autotest_common.sh@1665 -- # [[ 
none != none ]] 00:04:11.796 19:26:02 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:11.796 19:26:02 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:11.796 19:26:02 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:11.796 19:26:02 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:11.796 19:26:02 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:11.796 19:26:02 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:11.796 No valid GPT data, bailing 00:04:11.796 19:26:02 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:11.796 19:26:02 -- scripts/common.sh@391 -- # pt= 00:04:11.796 19:26:02 -- scripts/common.sh@392 -- # return 1 00:04:11.796 19:26:02 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:11.796 1+0 records in 00:04:11.796 1+0 records out 00:04:11.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125772 s, 83.4 MB/s 00:04:11.796 19:26:02 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:11.796 19:26:02 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:11.796 19:26:02 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:11.796 19:26:02 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:11.796 19:26:02 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:11.796 No valid GPT data, bailing 00:04:11.796 19:26:02 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:11.796 19:26:02 -- scripts/common.sh@391 -- # pt= 00:04:11.796 19:26:02 -- scripts/common.sh@392 -- # return 1 00:04:11.796 19:26:02 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:11.796 1+0 records in 00:04:11.796 1+0 records out 00:04:11.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00556 s, 189 MB/s 00:04:11.796 19:26:02 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:11.796 19:26:02 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:11.796 19:26:02 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:04:11.796 19:26:02 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:04:11.796 19:26:02 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:11.796 No valid GPT data, bailing 00:04:11.796 19:26:02 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:12.053 19:26:02 -- scripts/common.sh@391 -- # pt= 00:04:12.054 19:26:02 -- scripts/common.sh@392 -- # return 1 00:04:12.054 19:26:02 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:12.054 1+0 records in 00:04:12.054 1+0 records out 00:04:12.054 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00475739 s, 220 MB/s 00:04:12.054 19:26:02 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.054 19:26:02 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:12.054 19:26:02 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:04:12.054 19:26:02 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:04:12.054 19:26:02 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:12.054 No valid GPT data, bailing 00:04:12.054 19:26:02 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:12.054 19:26:02 -- scripts/common.sh@391 -- # pt= 00:04:12.054 19:26:02 -- scripts/common.sh@392 -- # return 1 00:04:12.054 19:26:02 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:12.054 1+0 records in 00:04:12.054 1+0 
records out 00:04:12.054 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00364153 s, 288 MB/s 00:04:12.054 19:26:02 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.054 19:26:02 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:12.054 19:26:02 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:04:12.054 19:26:02 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:04:12.054 19:26:02 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:12.054 No valid GPT data, bailing 00:04:12.054 19:26:02 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:12.054 19:26:02 -- scripts/common.sh@391 -- # pt= 00:04:12.054 19:26:02 -- scripts/common.sh@392 -- # return 1 00:04:12.054 19:26:02 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:12.054 1+0 records in 00:04:12.054 1+0 records out 00:04:12.054 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00385175 s, 272 MB/s 00:04:12.054 19:26:02 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.054 19:26:02 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:12.054 19:26:02 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:04:12.054 19:26:02 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:04:12.054 19:26:02 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:12.054 No valid GPT data, bailing 00:04:12.054 19:26:02 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:12.054 19:26:02 -- scripts/common.sh@391 -- # pt= 00:04:12.054 19:26:02 -- scripts/common.sh@392 -- # return 1 00:04:12.054 19:26:02 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:12.054 1+0 records in 00:04:12.054 1+0 records out 00:04:12.054 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0045176 s, 232 MB/s 00:04:12.054 19:26:02 -- spdk/autotest.sh@118 -- # sync 00:04:12.311 19:26:02 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:12.311 19:26:02 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:12.311 19:26:02 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:14.216 19:26:04 -- spdk/autotest.sh@124 -- # uname -s 00:04:14.216 19:26:04 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:14.216 19:26:04 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:14.216 19:26:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.216 19:26:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.216 19:26:04 -- common/autotest_common.sh@10 -- # set +x 00:04:14.216 ************************************ 00:04:14.216 START TEST setup.sh 00:04:14.216 ************************************ 00:04:14.216 19:26:04 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:14.475 * Looking for test storage... 
00:04:14.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:14.475 19:26:05 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:14.475 19:26:05 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:14.475 19:26:05 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:14.475 19:26:05 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.475 19:26:05 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.475 19:26:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:14.475 ************************************ 00:04:14.475 START TEST acl 00:04:14.475 ************************************ 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:14.475 * Looking for test storage... 00:04:14.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:14.475 19:26:05 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:04:14.475 19:26:05 
setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:14.475 19:26:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:04:14.476 19:26:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:04:14.476 19:26:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:14.476 19:26:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:14.476 19:26:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:14.476 19:26:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:04:14.476 19:26:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:04:14.476 19:26:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:14.476 19:26:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:14.476 19:26:05 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:14.476 19:26:05 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:14.476 19:26:05 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:14.476 19:26:05 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:14.476 19:26:05 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:14.476 19:26:05 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.476 19:26:05 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:15.851 19:26:06 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:15.851 19:26:06 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:15.851 19:26:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.851 19:26:06 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:15.851 19:26:06 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.851 19:26:06 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:16.417 19:26:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:16.417 19:26:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:16.417 19:26:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.676 Hugepages 00:04:16.676 node hugesize free / total 00:04:16.676 19:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:16.676 19:26:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:16.676 19:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.676 00:04:16.676 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:16.676 19:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:16.676 19:26:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:16.676 19:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.934 19:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:16.934 19:26:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:16.934 19:26:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:16.934 19:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:04:16.934 19:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:16.934 19:26:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:16.935 19:26:07 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:16.935 19:26:07 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:16.935 19:26:07 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:16.935 19:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.935 19:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:16.935 19:26:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:16.935 19:26:07 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:16.935 19:26:07 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:16.935 19:26:07 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:16.935 19:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:17.193 19:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:04:17.193 19:26:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:17.193 19:26:07 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:17.193 19:26:07 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:17.193 19:26:07 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:17.193 19:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:17.193 19:26:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:04:17.193 19:26:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:17.193 19:26:07 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:04:17.193 19:26:07 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:17.193 19:26:07 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:17.193 19:26:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:17.193 19:26:07 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:04:17.193 19:26:07 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:17.193 19:26:07 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.193 19:26:07 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.193 19:26:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:17.193 ************************************ 00:04:17.193 START TEST denied 00:04:17.193 ************************************ 00:04:17.193 19:26:07 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:17.193 19:26:07 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:17.193 19:26:07 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:17.193 19:26:07 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:17.193 19:26:07 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.193 19:26:07 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:18.567 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:18.567 19:26:09 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:18.567 19:26:09 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:18.567 19:26:09 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:18.567 19:26:09 setup.sh.acl.denied -- 
setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:18.567 19:26:09 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:18.567 19:26:09 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:18.567 19:26:09 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:18.567 19:26:09 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:18.567 19:26:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:18.567 19:26:09 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:25.153 00:04:25.153 real 0m7.498s 00:04:25.153 user 0m0.893s 00:04:25.153 sys 0m1.665s 00:04:25.153 19:26:15 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.153 ************************************ 00:04:25.153 END TEST denied 00:04:25.153 ************************************ 00:04:25.153 19:26:15 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:25.153 19:26:15 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:25.153 19:26:15 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:25.153 19:26:15 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.153 19:26:15 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.153 19:26:15 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:25.153 ************************************ 00:04:25.153 START TEST allowed 00:04:25.153 ************************************ 00:04:25.153 19:26:15 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:25.153 19:26:15 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:25.153 19:26:15 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:25.153 19:26:15 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:25.153 19:26:15 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.153 19:26:15 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:26.110 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.110 19:26:16 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:26.110 19:26:16 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:26.110 19:26:16 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:26.110 19:26:16 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:26.110 19:26:16 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:26.110 19:26:16 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:26.110 19:26:16 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:26.110 19:26:16 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:26.110 19:26:16 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:04:26.110 19:26:16 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:04:26.110 19:26:16 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:26.110 19:26:16 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:26.110 19:26:16 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in 
"$@" 00:04:26.110 19:26:16 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]] 00:04:26.110 19:26:16 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:04:26.110 19:26:16 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:26.110 19:26:16 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:26.110 19:26:16 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:26.110 19:26:16 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:26.110 19:26:16 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:27.485 00:04:27.485 real 0m2.439s 00:04:27.485 user 0m0.999s 00:04:27.485 sys 0m1.461s 00:04:27.485 19:26:17 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.485 19:26:17 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:27.485 ************************************ 00:04:27.485 END TEST allowed 00:04:27.485 ************************************ 00:04:27.485 19:26:17 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:27.485 00:04:27.485 real 0m12.879s 00:04:27.485 user 0m3.183s 00:04:27.485 sys 0m4.802s 00:04:27.485 19:26:17 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.485 19:26:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:27.485 ************************************ 00:04:27.485 END TEST acl 00:04:27.485 ************************************ 00:04:27.485 19:26:17 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:27.485 19:26:17 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:27.485 19:26:17 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.485 19:26:17 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.485 19:26:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:27.485 ************************************ 00:04:27.485 START TEST hugepages 00:04:27.485 ************************************ 00:04:27.485 19:26:17 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:27.485 * Looking for test storage... 
00:04:27.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5814176 kB' 'MemAvailable: 7408356 kB' 'Buffers: 2436 kB' 'Cached: 1807336 kB' 'SwapCached: 0 kB' 'Active: 449904 kB' 'Inactive: 1463600 kB' 'Active(anon): 114244 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 105412 kB' 'Mapped: 51440 kB' 'Shmem: 10512 kB' 'KReclaimable: 63716 kB' 'Slab: 142240 kB' 'SReclaimable: 63716 kB' 'SUnreclaim: 78524 kB' 'KernelStack: 6396 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 328080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.485 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.486 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
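The repetitive entries around this point are hugepages.sh's get_meminfo walking /proc/meminfo field by field until it reaches Hugepagesize. A condensed sketch of the same lookup (meminfo_value is a hypothetical helper, not the setup/common.sh function):

  meminfo_value() {
      # Print the numeric value of one /proc/meminfo field, e.g. "Hugepagesize" -> 2048.
      local key=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$key" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }
  meminfo_value Hugepagesize    # prints 2048 on this VM, matching the dump above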
00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:27.487 19:26:18 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:27.487 19:26:18 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.487 19:26:18 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.487 19:26:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:27.487 ************************************ 00:04:27.487 START TEST default_setup 00:04:27.487 ************************************ 00:04:27.487 19:26:18 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:27.487 19:26:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:27.487 19:26:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:27.487 19:26:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:27.487 19:26:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:27.487 19:26:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:04:27.487 19:26:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:27.487 19:26:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:27.487 19:26:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:27.487 19:26:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:27.487 19:26:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:27.487 19:26:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:27.487 19:26:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:27.487 19:26:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:27.487 19:26:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:27.487 19:26:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:27.487 19:26:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:27.487 19:26:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:27.487 19:26:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:27.487 19:26:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:27.487 19:26:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:27.487 19:26:18 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.487 19:26:18 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:28.050 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:28.615 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:28.877 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:28.877 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:28.877 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:28.877 
19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7908508 kB' 'MemAvailable: 9502436 kB' 'Buffers: 2436 kB' 'Cached: 1807312 kB' 'SwapCached: 0 kB' 'Active: 466108 kB' 'Inactive: 1463608 kB' 'Active(anon): 130448 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 121632 kB' 'Mapped: 51560 kB' 'Shmem: 10472 kB' 'KReclaimable: 63196 kB' 'Slab: 141704 kB' 'SReclaimable: 63196 kB' 'SUnreclaim: 78508 kB' 'KernelStack: 6384 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.877 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
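Just before default_setup started, clear_hp wrote 0 into each per-node hugepage pool and the test then asked for 1024 pages of the default 2048 kB size on node 0; the /proc/meminfo snapshot being walked here shows HugePages_Total at 1024, down from 2048 in the earlier dump. A minimal sketch of driving that per-node sysfs knob directly (needs root; node and page size taken from this log, not from the setup.sh internals):

  node=0
  size_kb=2048
  nr=/sys/devices/system/node/node$node/hugepages/hugepages-${size_kb}kB/nr_hugepages
  echo 0    > "$nr"                                  # clear any existing pool, as clear_hp does
  echo 1024 > "$nr"                                  # request 1024 hugepages on this node
  cat "$nr"                                          # pages actually reserved
  grep -E 'HugePages_Total|Hugetlb' /proc/meminfo    # system-wide view of the pool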
00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7908284 kB' 'MemAvailable: 9502212 kB' 'Buffers: 2436 kB' 'Cached: 1807312 kB' 'SwapCached: 0 kB' 'Active: 465828 kB' 'Inactive: 1463608 kB' 'Active(anon): 130168 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463608 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 121320 kB' 'Mapped: 51620 kB' 'Shmem: 10472 kB' 'KReclaimable: 63196 kB' 'Slab: 141704 kB' 'SReclaimable: 63196 kB' 'SUnreclaim: 78508 kB' 'KernelStack: 6352 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.878 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.879 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7908284 kB' 'MemAvailable: 9502216 kB' 'Buffers: 2436 kB' 'Cached: 1807312 kB' 'SwapCached: 0 kB' 'Active: 465644 kB' 'Inactive: 1463612 kB' 'Active(anon): 129984 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 121144 kB' 'Mapped: 51516 kB' 'Shmem: 10472 kB' 'KReclaimable: 63196 kB' 'Slab: 141704 kB' 'SReclaimable: 63196 kB' 'SUnreclaim: 78508 kB' 'KernelStack: 6384 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.880 19:26:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.880 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 
19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
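In the trace that follows, the HugePages_Rsvd lookup finishes the same way (resv=0), the script echoes the collected counters (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and then sanity-checks them against the requested pool size before re-reading HugePages_Total. A small sketch of that arithmetic check, with variable names assumed from the echoed output rather than taken from setup/hugepages.sh:

  # Sketch only (assumed): consistency check on the hugepage counters gathered above.
  expected=1024      # requested default pool size
  nr_hugepages=1024  # HugePages_Total reported by the kernel
  surp=0             # HugePages_Surp
  resv=0             # HugePages_Rsvd
  anon=0             # AnonHugePages

  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  echo "anon_hugepages=$anon"

  # The setup is treated as healthy when the kernel's pool, plus any surplus and
  # reserved pages, matches the request exactly.
  (( expected == nr_hugepages + surp + resv )) && (( expected == nr_hugepages )) \
      || echo "hugepage accounting mismatch" >&2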
00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:28.881 nr_hugepages=1024 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:28.881 resv_hugepages=0 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:28.881 surplus_hugepages=0 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:28.881 anon_hugepages=0 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages )) 00:04:28.881 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:29.141 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.141 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:29.141 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:29.141 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:29.141 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.141 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.141 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.141 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.141 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.141 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7908284 kB' 'MemAvailable: 9502216 kB' 'Buffers: 2436 kB' 'Cached: 1807312 kB' 'SwapCached: 0 kB' 'Active: 465600 kB' 'Inactive: 1463612 kB' 'Active(anon): 129940 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 121100 kB' 'Mapped: 51516 kB' 'Shmem: 10472 kB' 'KReclaimable: 63196 kB' 'Slab: 141704 kB' 'SReclaimable: 63196 kB' 'SUnreclaim: 78508 kB' 'KernelStack: 6368 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
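[Editor's note] The loop traced above is setup/common.sh's get_meminfo walking every "key: value" line of the meminfo file and skipping it with continue until the requested field turns up (HugePages_Total here, answered with echo 1024). A minimal sketch of an equivalent reader, reconstructed from the trace rather than copied from the real setup/common.sh, would look roughly like this:

shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument, read that node's meminfo instead (as the trace
    # does next for HugePages_Surp on node0).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines carry a "Node <n> " prefix; strip it so the same
    # "key: value" parsing works for both files.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # every non-matching key is skipped
        echo "$val"                        # e.g. 1024 for HugePages_Total
        return 0
    done
    return 1
}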
00:04:29.143 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7908284 kB' 'MemUsed: 4333696 kB' 'SwapCached: 0 kB' 'Active: 465616 kB' 'Inactive: 1463612 kB' 'Active(anon): 129956 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 1809748 kB' 'Mapped: 51516 kB' 'AnonPages: 121112 kB' 'Shmem: 10472 kB' 'KernelStack: 6384 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63196 kB' 'Slab: 141704 kB' 'SReclaimable: 63196 kB' 'SUnreclaim: 78508 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- 
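[Editor's note] The node0 dump printed above can be sanity-checked directly; the figures below are copied from that dump, and the 2048 kB page size is taken from the system-wide meminfo printed later in this run, so treat the second line as an assumption about the node's page size:

echo $(( 12241980 - 7908284 ))   # MemTotal - MemFree = 4333696 kB, matching the MemUsed line
echo $(( 1024 * 2048 ))          # HugePages_Total x 2048 kB = 2097152 kB held in hugepages on node0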
setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.144 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.145 19:26:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.145 node0=1024 expecting 1024 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:29.145 ************************************ 00:04:29.145 END TEST default_setup 00:04:29.145 ************************************ 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:29.145 00:04:29.145 real 0m1.569s 00:04:29.145 user 0m0.646s 00:04:29.145 sys 0m0.923s 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.145 19:26:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:29.145 19:26:19 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:29.145 19:26:19 setup.sh.hugepages -- 
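[Editor's note] What default_setup just verified, stripped of the xtrace noise: the per-node HugePages_Total readings are accumulated into nodes_test, surplus and reserved pages are folded in per node, the distinct totals are recorded in sorted_t/sorted_s, and the test passes on the final "node0=1024 expecting 1024" comparison. An illustrative condensation follows; the variable names mirror the trace, not the exact hugepages.sh source:

nr_hugepages=1024 surp=0 resv=0
nodes_test=([0]=1024) nodes_sys=([0]=1024)            # filled from the get_meminfo reads above
sorted_t=() sorted_s=()
(( 1024 == nr_hugepages + surp + resv )) || exit 1    # the hugepages.sh@110 check
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                    # @116: add reserved pages for the node
    (( nodes_test[node] += 0 ))                       # @117: add the node's surplus (0 here)
    sorted_t[nodes_test[node]]=1                      # @127: record the distinct totals
    sorted_s[nodes_sys[node]]=1
    echo "node$node=${nodes_test[node]} expecting $nr_hugepages"
done
[[ $nr_hugepages == 1024 ]]                           # @130: final pass/fail comparison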
setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:29.145 19:26:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.145 19:26:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.145 19:26:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:29.145 ************************************ 00:04:29.145 START TEST per_node_1G_alloc 00:04:29.145 ************************************ 00:04:29.145 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:29.145 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:29.145 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:29.145 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:29.145 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:29.145 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:29.145 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:29.145 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:29.145 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:29.145 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:29.145 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:29.145 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:29.145 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:29.145 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:29.145 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:29.145 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:29.145 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:29.145 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:29.145 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:29.145 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:29.145 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:29.145 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:29.145 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:29.146 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:29.146 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.146 19:26:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:29.712 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:29.712 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:29.712 0000:00:10.0 
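[Editor's note] per_node_1G_alloc starts by converting the requested size into a page count: 1048576 kB (1 GiB) confined to node 0, divided by the 2048 kB default hugepage size reported in the meminfo dumps below, gives the NRHUGE=512 HUGENODE=0 that setup.sh is re-run with. The same arithmetic, spelled out as a sketch:

size_kb=1048576                     # 1 GiB requested for node 0
hugepage_kb=2048                    # default hugepage size (Hugepagesize in the dumps below)
echo $(( size_kb / hugepage_kb ))   # -> 512, i.e. NRHUGE=512 HUGENODE=0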
(1b36 0010): Already using the uio_pci_generic driver 00:04:29.712 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:29.712 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:29.712 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8961100 kB' 'MemAvailable: 10555040 kB' 'Buffers: 2436 kB' 'Cached: 1807320 kB' 'SwapCached: 0 kB' 'Active: 466168 kB' 'Inactive: 1463620 kB' 'Active(anon): 130508 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 121688 kB' 'Mapped: 51584 kB' 'Shmem: 10472 kB' 'KReclaimable: 63196 kB' 'Slab: 141756 kB' 'SReclaimable: 63196 kB' 'SUnreclaim: 78560 kB' 'KernelStack: 6380 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 
19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.713 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # 
anon=0 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8961572 kB' 'MemAvailable: 10555520 kB' 'Buffers: 2436 kB' 'Cached: 1807324 kB' 'SwapCached: 0 kB' 'Active: 465788 kB' 'Inactive: 1463628 kB' 'Active(anon): 130128 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 121256 kB' 'Mapped: 51456 kB' 'Shmem: 10472 kB' 'KReclaimable: 63196 kB' 'Slab: 141720 kB' 'SReclaimable: 63196 kB' 'SUnreclaim: 78524 kB' 'KernelStack: 6352 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc 
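[Editor's note] Before counting hugepages, verify_nr_hugepages checked whether transparent hugepages could skew the numbers: the earlier [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test is the content of /sys/kernel/mm/transparent_hugepage/enabled, and since THP is not set to "never" the script reads AnonHugePages, which came back 0 (anon=0 above). A rough sketch of that guard, reusing the get_meminfo sketch from earlier and written for illustration rather than as the literal hugepages.sh code:

thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
if [[ $thp != *'[never]'* ]]; then
    anon=$(get_meminfo AnonHugePages)   # 0 kB in this run, so THP is not inflating the totals
else
    anon=0
fi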
-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.714 19:26:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.714 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.715 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.976 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.976 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.976 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.976 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.976 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.976 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.976 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.976 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.976 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.976 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.976 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.976 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.976 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.976 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.976 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@33 -- # return 0 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8961572 kB' 'MemAvailable: 10555520 kB' 'Buffers: 2436 kB' 'Cached: 1807324 kB' 'SwapCached: 0 kB' 'Active: 465768 kB' 'Inactive: 1463628 kB' 'Active(anon): 130108 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 144 kB' 'Writeback: 0 kB' 'AnonPages: 121252 kB' 'Mapped: 51456 kB' 'Shmem: 10472 kB' 'KReclaimable: 63196 kB' 'Slab: 141720 kB' 'SReclaimable: 63196 kB' 'SUnreclaim: 78524 kB' 'KernelStack: 6352 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.977 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 
19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.978 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.979 19:26:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:29.979 nr_hugepages=512 00:04:29.979 resv_hugepages=0 00:04:29.979 surplus_hugepages=0 00:04:29.979 anon_hugepages=0 00:04:29.979 
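The repeated key-by-key pattern matching traced above is the meminfo lookup helper from setup/common.sh: it loads /proc/meminfo (or a per-node meminfo file when a node is selected), strips any leading "Node <n> " prefix, and walks the "key: value" pairs until the requested field (AnonHugePages, HugePages_Surp, HugePages_Rsvd, ...) is found, echoing its value. A minimal bash sketch of that lookup follows; the function name meminfo_value and its node-argument handling are illustrative, not the exact helper the test calls:

    #!/usr/bin/env bash
    # Minimal sketch of the meminfo lookup seen in the trace above.
    # meminfo_value and its defaults are illustrative names, not the real helper.
    shopt -s extglob
    meminfo_value() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val rest
        local -a mem
        # With a node argument, prefer the per-node meminfo file if it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <n> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val rest <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"   # e.g. 0 for HugePages_Rsvd, 512 for HugePages_Total
                return 0
            fi
        done
        return 1
    }

Called as, say, meminfo_value HugePages_Surp, it prints the numeric value that the hugepages test then stores; the anon=0, surp=0 and resv=0 assignments in the trace above are exactly such lookups.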
19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8961572 kB' 'MemAvailable: 10555520 kB' 'Buffers: 2436 kB' 'Cached: 1807324 kB' 'SwapCached: 0 kB' 'Active: 466104 kB' 'Inactive: 1463628 kB' 'Active(anon): 130444 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 144 kB' 'Writeback: 0 kB' 'AnonPages: 121672 kB' 'Mapped: 51456 kB' 'Shmem: 10472 kB' 'KReclaimable: 63196 kB' 'Slab: 141720 kB' 'SReclaimable: 63196 kB' 'SUnreclaim: 78524 kB' 'KernelStack: 6400 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
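With anon, surp, resv and the reported HugePages_Total in hand, the consistency checks traced at hugepages.sh@107 and @109 reduce to plain arithmetic: the 512 pages requested by this test (512 pages of 2048 kB each, i.e. 1 GiB, matching the Hugetlb: 1048576 kB figure in the snapshot) must be accounted for by the total plus surplus and reserved pages. A rough restatement of that check, with variable roles inferred from the echoed values and hard-coded here purely for illustration:

    # Hypothetical restatement of the checks seen in the trace;
    # in the test these values come from the meminfo lookups above.
    expected=512        # pages requested by the per-node 1G allocation test
    nr_hugepages=512    # HugePages_Total
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    if (( expected == nr_hugepages + surp + resv )) && (( expected == nr_hugepages )); then
        echo "hugepage accounting consistent: ${nr_hugepages} pages of 2048 kB"
    else
        echo "unexpected hugepage accounting" >&2
        exit 1
    fi

When both comparisons hold, as they do here, the script re-reads HugePages_Total, which is the scan the trace continues with below.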
00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.979 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.979 19:26:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.979 19:26:20 
[xtrace condensed: each remaining /proc/meminfo field, Inactive(anon) through Unaccepted, is compared against HugePages_Total at setup/common.sh@32 and skipped via 'continue' until the matching field is reached]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.981 19:26:20 
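The loop traced above (and in the similar runs below) is the field lookup in SPDK's setup/common.sh get_meminfo helper: it reads a meminfo file, strips the per-node "Node N " prefix, and walks the fields with IFS=': ' until the requested one matches. The following is a minimal, simplified sketch of that logic reconstructed from this trace, not the verbatim SPDK source; the name get_meminfo_sketch and the exact option handling are illustrative assumptions.
#!/usr/bin/env bash
shopt -s extglob    # the "Node +([0-9]) " strip below uses an extglob pattern, as in common.sh

# Sketch of the lookup stepped through in the xtrace: print the value of a
# meminfo field, optionally from a specific NUMA node's meminfo file.
get_meminfo_sketch() {                      # illustrative name, not the real helper
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")        # per-node files prefix every line with "Node N "
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }   # non-matching fields are the 'continue' lines above
    done
    return 1
}

# Example against the state captured above: prints 512 for HugePages_Total on node 0.
get_meminfo_sketch HugePages_Total 0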
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8961572 kB' 'MemUsed: 3280408 kB' 'SwapCached: 0 kB' 'Active: 465960 kB' 'Inactive: 1463628 kB' 'Active(anon): 130300 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 144 kB' 'Writeback: 0 kB' 'FilePages: 1809760 kB' 'Mapped: 51456 kB' 'AnonPages: 121400 kB' 'Shmem: 10472 kB' 'KernelStack: 6336 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63196 kB' 'Slab: 141708 kB' 'SReclaimable: 63196 kB' 'SUnreclaim: 78512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.981 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.981 19:26:20 
[xtrace condensed: the remaining node0 meminfo fields, MemUsed through HugePages_Total, are compared against HugePages_Surp at setup/common.sh@32 and skipped via 'continue']
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.982 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.982 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.982 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.982 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.982 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.982 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.982 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.982 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.982 node0=512 expecting 512 00:04:29.982 ************************************ 00:04:29.982 END TEST per_node_1G_alloc 00:04:29.982 ************************************ 00:04:29.982 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.982 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.982 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.982 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:29.982 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:29.982 00:04:29.982 real 0m0.847s 00:04:29.982 user 0m0.360s 00:04:29.982 sys 0m0.520s 00:04:29.982 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.982 19:26:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:29.982 19:26:20 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:29.982 19:26:20 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:29.982 19:26:20 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.982 19:26:20 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.982 19:26:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:29.982 ************************************ 00:04:29.982 START TEST even_2G_alloc 00:04:29.982 ************************************ 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:29.982 19:26:20 
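The get_test_nr_hugepages 2097152 call above lands on nr_hugepages=1024; assuming the size argument is in kB (consistent with the 'Hugepagesize: 2048 kB' reported in the meminfo dumps), that is just the requested size divided by the default hugepage size. A one-line check of the arithmetic:
# 2 GiB expressed in kB, divided by the 2048 kB default hugepage size -> 1024 pages
echo $(( 2097152 / 2048 ))    # prints 1024, matching nr_hugepages=1024 in the trace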
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.982 19:26:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:30.549 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:30.549 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:30.549 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:30.549 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:30.549 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.549 19:26:21 
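At this point the even_2G_alloc test exports NRHUGE=1024 and HUGE_EVEN_ALLOC=yes and re-runs scripts/setup.sh (the PCI lines above are setup.sh skipping mounted vda partitions and devices already bound to uio_pci_generic), after which verify_nr_hugepages re-reads /proc/meminfo. A rough outline of the verification that follows, reconstructed from the xtrace: variable names mirror the trace, get_meminfo_sketch is the illustrative helper from the earlier sketch, and pulling resv from HugePages_Rsvd is an assumption, since that lookup falls outside this excerpt.
# Sketch of the consistency check verify_nr_hugepages works toward (cf. setup/hugepages.sh@110 earlier):
anon=$(get_meminfo_sketch AnonHugePages)      # read first in the trace; expected to stay 0
surp=$(get_meminfo_sketch HugePages_Surp)     # surplus hugepages, read next in the trace
resv=$(get_meminfo_sketch HugePages_Rsvd)     # assumed source of 'resv'; not shown in this excerpt
total=$(get_meminfo_sketch HugePages_Total)
(( total == 1024 + surp + resv )) || echo "unexpected HugePages_Total: $total"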
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7909952 kB' 'MemAvailable: 9503900 kB' 'Buffers: 2436 kB' 'Cached: 1807324 kB' 'SwapCached: 0 kB' 'Active: 466488 kB' 'Inactive: 1463628 kB' 'Active(anon): 130828 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121900 kB' 'Mapped: 51568 kB' 'Shmem: 10472 kB' 'KReclaimable: 63196 kB' 'Slab: 141708 kB' 'SReclaimable: 63196 kB' 'SUnreclaim: 78512 kB' 'KernelStack: 6376 kB' 'PageTables: 4000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.549 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.549 19:26:21 
[xtrace condensed: the remaining /proc/meminfo fields, Cached through VmallocChunk, are compared against AnonHugePages at setup/common.sh@32 and skipped via 'continue']
00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.813 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7909952 kB' 'MemAvailable: 9503900 kB' 'Buffers: 2436 kB' 'Cached: 1807324 kB' 'SwapCached: 0 kB' 'Active: 465784 kB' 'Inactive: 1463628 kB' 'Active(anon): 130124 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121276 kB' 'Mapped: 51456 kB' 'Shmem: 10472 kB' 'KReclaimable: 63196 kB' 'Slab: 141752 kB' 'SReclaimable: 63196 kB' 'SUnreclaim: 78556 kB' 'KernelStack: 6352 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.814 19:26:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.814 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.815 19:26:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.815 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7909952 kB' 'MemAvailable: 9503900 kB' 'Buffers: 2436 kB' 'Cached: 1807324 kB' 'SwapCached: 0 kB' 'Active: 465564 kB' 'Inactive: 1463628 kB' 'Active(anon): 129904 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121008 kB' 'Mapped: 51456 kB' 'Shmem: 10472 kB' 'KReclaimable: 63196 kB' 'Slab: 141752 kB' 'SReclaimable: 63196 kB' 'SUnreclaim: 78556 kB' 'KernelStack: 6352 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.816 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 
19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
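For readers skimming the xtrace above: the long runs of [[ <key> == \H\u\g\e\P\a\g\e\s\_... ]] / continue pairs are setup/common.sh walking every field of the /proc/meminfo snapshot until it reaches the one requested (AnonHugePages, then HugePages_Surp, then HugePages_Rsvd here), and echoing that field's bare number. Below is a minimal standalone sketch of that scan pattern, assembled only from what the trace shows; the function name get_meminfo_value and its exact argument handling are illustrative assumptions, not the project's actual helper.

#!/usr/bin/env bash
# Hypothetical standalone sketch; the real logic lives in setup/common.sh of
# the tree under test and may differ in detail.
shopt -s extglob

get_meminfo_value() {    # illustrative name, not the script's own helper
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # a per-node query reads the sysfs copy instead, as done for node 0 later
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # sysfs meminfo lines carry a "Node N " prefix that /proc/meminfo lacks
    mem=("${mem[@]#Node +([0-9]) }")
    local var val _
    while IFS=': ' read -r var val _; do
        # non-matching keys are skipped (the "continue" entries in the trace)
        # until the requested one is found, then its bare number is printed
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# e.g.  get_meminfo_value HugePages_Rsvd     -> 0      on the box above
#       get_meminfo_value HugePages_Surp 0   -> node 0's surplus count

The scan is linear over the snapshot each time, which is why the same field names repeat for every query in the log; the snapshot itself is re-read for each lookup, so values such as Active and AnonPages drift slightly between the three printf dumps above.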
00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:30.817 nr_hugepages=1024 00:04:30.817 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:30.817 resv_hugepages=0 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:30.818 surplus_hugepages=0 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:30.818 anon_hugepages=0 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages )) 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7909952 kB' 'MemAvailable: 9503900 kB' 'Buffers: 2436 kB' 'Cached: 1807324 kB' 'SwapCached: 0 kB' 'Active: 465860 kB' 'Inactive: 1463628 kB' 'Active(anon): 130200 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121312 kB' 'Mapped: 51456 kB' 'Shmem: 10472 kB' 'KReclaimable: 63196 kB' 'Slab: 141748 kB' 'SReclaimable: 63196 kB' 'SUnreclaim: 78552 kB' 'KernelStack: 6368 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.818 19:26:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.818 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.819 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 
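
The xtrace above is the get_meminfo helper from setup/common.sh scanning a meminfo file one "key: value" pair at a time: it keeps hitting continue until the key matches the field it was asked for (HugePages_Total earlier, HugePages_Surp for node 0 next), then echoes the value and returns. Below is a minimal sketch of that lookup, assuming the names shown in the trace; argument handling and error paths are simplified guesses, not a copy of the script.

#!/usr/bin/env bash
# Minimal sketch (not the real setup/common.sh) of the lookup traced above:
# get_meminfo FIELD [NODE] prints the value of FIELD from /proc/meminfo, or
# from the per-node meminfo in sysfs when a NUMA node number is given.
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _

    # Per-node statistics live under sysfs; fall back to the global file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node lines carry a "Node <n> " prefix; strip it so both files parse alike.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # not the field we want yet
        echo "$val"                        # value in kB, or a bare page count
        return 0
    done
    return 1
}

# Example: current hugepage count on node 0, as queried in the trace.
get_meminfo HugePages_Total 0
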
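The same stretch of trace also shows the check hugepages.sh performs once the pages are allocated: HugePages_Total has to equal the requested count plus surplus plus reserved pages, and each NUMA node is then compared against the count the test expected to land there. The sketch below is a rough reconstruction under those assumptions; it presumes a get_meminfo helper with the behaviour above, 2048 kB hugepages, a single node, and it compares node by node where the script itself builds sorted_t/sorted_s sets.

#!/usr/bin/env bash
# Rough reconstruction of the even_2G_alloc verification arithmetic seen in the
# trace; variable names follow the trace, the rest is a simplified assumption.
# Assumes a get_meminfo helper like the one sketched just above.

nr_hugepages=1024             # total pages the test requested
declare -a nodes_sys=()       # pages the kernel actually placed per node
declare -a nodes_test=()      # pages the test expects per node
nodes_test[0]=$nr_hugepages   # single-node VM in this run, so node 0 gets them all

# Live per-node counts straight from sysfs (assumes the default 2048 kB size).
for node in /sys/devices/system/node/node[0-9]*; do
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done

# Global sanity check: total == requested + surplus + reserved.
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1

# Fold reserved pages and any per-node surplus into the expectation,
# then compare what each node reports against what the test expected.
for n in "${!nodes_test[@]}"; do
    (( nodes_test[n] += resv ))
    (( nodes_test[n] += $(get_meminfo HugePages_Surp "$n") ))
    echo "node$n=${nodes_sys[n]} expecting ${nodes_test[n]}"
    (( nodes_sys[n] == nodes_test[n] )) || exit 1
done
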
00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7909952 kB' 'MemUsed: 4332028 kB' 'SwapCached: 0 kB' 'Active: 465532 kB' 'Inactive: 1463628 kB' 'Active(anon): 129872 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1809760 kB' 'Mapped: 51456 kB' 'AnonPages: 120964 kB' 'Shmem: 10472 kB' 'KernelStack: 6336 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63196 kB' 'Slab: 141744 kB' 'SReclaimable: 63196 kB' 'SUnreclaim: 78548 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.820 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.821 19:26:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.821 19:26:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:30.821 node0=1024 expecting 1024 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:30.821 00:04:30.821 real 0m0.820s 00:04:30.821 user 0m0.372s 00:04:30.821 sys 0m0.487s 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.821 19:26:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:30.821 ************************************ 00:04:30.821 END TEST even_2G_alloc 00:04:30.821 ************************************ 00:04:30.821 19:26:21 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:30.821 19:26:21 setup.sh.hugepages -- 
setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:30.821 19:26:21 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.821 19:26:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.821 19:26:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:30.821 ************************************ 00:04:30.821 START TEST odd_alloc 00:04:30.821 ************************************ 00:04:30.821 19:26:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:30.821 19:26:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:30.821 19:26:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:30.821 19:26:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:30.821 19:26:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:30.821 19:26:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:30.821 19:26:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:30.821 19:26:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:30.821 19:26:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:30.821 19:26:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:30.822 19:26:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:30.822 19:26:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:30.822 19:26:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:30.822 19:26:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:30.822 19:26:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:30.822 19:26:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:30.822 19:26:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:30.822 19:26:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:30.822 19:26:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:30.822 19:26:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:30.822 19:26:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:30.822 19:26:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:30.822 19:26:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:30.822 19:26:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.822 19:26:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:31.389 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:31.389 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:31.389 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:31.389 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:31.389 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:31.389 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:31.389 19:26:22 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:31.389 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:31.389 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:31.389 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:31.389 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:31.389 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:31.389 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:31.389 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:31.389 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:31.389 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:31.389 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:31.389 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.389 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.389 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.389 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.652 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.652 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.652 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.652 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7907736 kB' 'MemAvailable: 9501684 kB' 'Buffers: 2436 kB' 'Cached: 1807324 kB' 'SwapCached: 0 kB' 'Active: 466012 kB' 'Inactive: 1463628 kB' 'Active(anon): 130352 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 121232 kB' 'Mapped: 51424 kB' 'Shmem: 10472 kB' 'KReclaimable: 63196 kB' 'Slab: 141820 kB' 'SReclaimable: 63196 kB' 'SUnreclaim: 78624 kB' 'KernelStack: 6316 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.653 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 19:26:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7907232 kB' 'MemAvailable: 9501180 kB' 'Buffers: 2436 kB' 'Cached: 1807324 kB' 'SwapCached: 0 kB' 'Active: 465852 kB' 'Inactive: 1463628 kB' 'Active(anon): 130192 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121304 kB' 'Mapped: 51388 kB' 'Shmem: 10472 kB' 'KReclaimable: 63196 kB' 'Slab: 141728 kB' 'SReclaimable: 63196 kB' 'SUnreclaim: 78532 kB' 'KernelStack: 6368 kB' 'PageTables: 4136 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.654 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.654 19:26:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:31.654-00:04:31.656 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [get_meminfo scan continues through the remaining /proc/meminfo keys, Active(anon) through HugePages_Rsvd; none match HugePages_Surp, each takes the continue branch]
00:04:31.656 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:31.656 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:31.656 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:31.656 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
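The loop traced above is setup/common.sh's get_meminfo walking a meminfo-style file one "key: value" line at a time until it reaches the requested key (HugePages_Surp here) and echoing its value. A minimal stand-alone sketch of that kind of lookup; the function name and flow below are illustrative, not the upstream implementation:

#!/usr/bin/env bash
# Illustrative sketch only: look up one key in /proc/meminfo (or a per-node
# meminfo file) by scanning it line by line, the way the trace above does.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node statistics live under /sys/devices/system/node/nodeN/meminfo.
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    # Per-node files prefix every line with "Node N "; strip that so the key
    # lands in the first field, then skip keys until the requested one matches.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

get_meminfo_sketch HugePages_Surp     # prints 0 for the snapshot logged in this run
get_meminfo_sketch HugePages_Total 0  # node-0 lookup; prints 1025 in this run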
00:04:31.656 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:31.656 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:31.656 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:31.656 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:31.656 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:31.656 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.656 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:31.656 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.656 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.656 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.656 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:31.656 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7907232 kB' 'MemAvailable: 9501180 kB' 'Buffers: 2436 kB' 'Cached: 1807324 kB' 'SwapCached: 0 kB' 'Active: 465848 kB' 'Inactive: 1463628 kB' 'Active(anon): 130188 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121320 kB' 'Mapped: 51388 kB' 'Shmem: 10472 kB' 'KReclaimable: 63196 kB' 'Slab: 141712 kB' 'SReclaimable: 63196 kB' 'SUnreclaim: 78516 kB' 'KernelStack: 6352 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
00:04:31.656 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:31.656-00:04:31.657 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [get_meminfo scan of MemTotal through HugePages_Free; none match HugePages_Rsvd, each takes the continue branch]
00:04:31.657 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:31.657 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:31.657 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:31.657 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:31.657 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:31.657 nr_hugepages=1025
00:04:31.657 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:31.657 resv_hugepages=0
00:04:31.657 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:31.657 surplus_hugepages=0
00:04:31.657 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:31.657 anon_hugepages=0
00:04:31.657 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:31.657 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
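With surp=0 and resv=0 read back and nr_hugepages=1025 reported, the checks at hugepages.sh@107 and @109 above reduce to the accounting identity HugePages_Total == nr_hugepages + surp + resv, which the test then re-verifies by reading HugePages_Total itself. A hedged sketch of that verification step, reusing the illustrative get_meminfo_sketch from the previous snippet (the variable names below mirror the trace but the helper is ours, not the test's):

nr_hugepages=1025
surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in this run
total=$(get_meminfo_sketch HugePages_Total)  # 1025 in this run

# Same consistency checks the trace performs: the kernel's HugePages_Total
# must account for requested, surplus and reserved pages, and with both of
# those at zero it must equal the requested nr_hugepages exactly.
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
(( total == nr_hugepages )) && echo "nr_hugepages=$nr_hugepages confirmed"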
00:04:31.657 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:31.657 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:31.658 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:31.658 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:31.658 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:31.658 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.658 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:31.658 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.658 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.658 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.658 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:31.658 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:31.658 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7907232 kB' 'MemAvailable: 9501180 kB' 'Buffers: 2436 kB' 'Cached: 1807324 kB' 'SwapCached: 0 kB' 'Active: 465616 kB' 'Inactive: 1463628 kB' 'Active(anon): 129956 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121072 kB' 'Mapped: 51388 kB' 'Shmem: 10472 kB' 'KReclaimable: 63196 kB' 'Slab: 141712 kB' 'SReclaimable: 63196 kB' 'SUnreclaim: 78516 kB' 'KernelStack: 6320 kB' 'PageTables: 3996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
00:04:31.658-00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [get_meminfo scan of MemTotal through Unaccepted; none match HugePages_Total, each takes the continue branch]
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
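The get_nodes step just traced enumerates /sys/devices/system/node/node+([0-9]) and records the configured hugepage count per NUMA node; on this single-node VM that yields no_nodes=1 with all 1025 pages attributed to node 0. A rough stand-alone equivalent, again illustrative and built on the get_meminfo_sketch helper above rather than the upstream function:

shopt -s extglob nullglob
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    # Index by the numeric node id; value = hugepages reported for that node.
    id=${node##*node}
    nodes_sys[$id]=$(get_meminfo_sketch HugePages_Total "$id")
done
echo "no_nodes=${#nodes_sys[@]}"    # 1 on the VM in this log
echo "node0=${nodes_sys[0]:-unset}" # 1025 here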
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:31.659 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7907232 kB' 'MemUsed: 4334748 kB' 'SwapCached: 0 kB' 'Active: 465836 kB' 'Inactive: 1463628 kB' 'Active(anon): 130176 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1809760 kB' 'Mapped: 51388 kB' 'AnonPages: 121292 kB' 'Shmem: 10472 kB' 'KernelStack: 6304 kB' 'PageTables: 3948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63196 kB' 'Slab: 141712 kB' 'SReclaimable: 63196 kB' 'SUnreclaim: 78516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:04:31.659-00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [node0 get_meminfo scan begins: MemTotal through Unevictable checked against HugePages_Surp, each taking the continue branch]
00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:31.660 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:31.661 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:31.661 node0=1025 expecting 1025 00:04:31.661 19:26:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:31.661 
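The "node0=1025 expecting 1025" line that closes the odd_alloc check compares the per-node accounting built up above against what the node itself reports. A hedged way to reproduce the same comparison directly from sysfs, assuming the 2048 kB default hugepage size seen in this run; the function name and the sysfs-based reading are illustrative rather than the exact setup/hugepages.sh logic:

    check_node_hugepages() {                    # illustrative name only
        local node_dir nid allocated reported
        for node_dir in /sys/devices/system/node/node[0-9]*; do
            nid=${node_dir##*/node}
            # pages allocated on this node (2048 kB default size in this run)
            allocated=$(cat "$node_dir"/hugepages/hugepages-2048kB/nr_hugepages)
            # pages the node reports via its "Node N HugePages_Total: ..." meminfo line
            reported=$(awk '/HugePages_Total/ {print $4}' "$node_dir"/meminfo)
            echo "node$nid=$reported expecting $allocated"
            [[ $reported == "$allocated" ]] || return 1
        done
    }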
00:04:31.661 real 0m0.783s 00:04:31.661 user 0m0.343s 00:04:31.661 sys 0m0.473s 00:04:31.661 19:26:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.661 19:26:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:31.661 ************************************ 00:04:31.661 END TEST odd_alloc 00:04:31.661 ************************************ 00:04:31.661 19:26:22 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:31.661 19:26:22 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:31.661 19:26:22 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.661 19:26:22 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.661 19:26:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:31.661 ************************************ 00:04:31.661 START TEST custom_alloc 00:04:31.661 ************************************ 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.661 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:32.228 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:32.228 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:32.228 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:32.228 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:32.228 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:32.228 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:32.228 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:32.228 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:32.228 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:32.228 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:32.228 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:32.228 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:32.228 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:32.228 19:26:22 
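The custom_alloc preparation traced above turns the 1048576 kB (1 GiB) request into 512 pages of the 2048 kB default size and pins them to node 0 through the HUGENODE string handed to scripts/setup.sh. A short sketch of that arithmetic; reading Hugepagesize from /proc/meminfo is an assumption about where the default comes from, and the variable names are illustrative:

    request_kb=1048576                                                    # 1 GiB worth of hugepages
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)    # 2048 in this run
    nr_hugepages=$(( request_kb / hugepagesize_kb ))                      # -> 512
    HUGENODE="nodes_hp[0]=${nr_hugepages}"                                # -> nodes_hp[0]=512, as logged above
    echo "HUGENODE=$HUGENODE"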
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:32.228 19:26:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:32.228 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:32.228 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:32.228 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:32.228 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.228 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.228 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.228 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.228 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.228 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8964528 kB' 'MemAvailable: 10558472 kB' 'Buffers: 2436 kB' 'Cached: 1807324 kB' 'SwapCached: 0 kB' 'Active: 463456 kB' 'Inactive: 1463628 kB' 'Active(anon): 127796 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118860 kB' 'Mapped: 50820 kB' 'Shmem: 10472 kB' 'KReclaimable: 63188 kB' 'Slab: 141528 kB' 'SReclaimable: 63188 kB' 'SUnreclaim: 78340 kB' 'KernelStack: 6280 kB' 'PageTables: 3692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 336592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 
19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:32.229 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.230 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.230 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.230 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.230 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.230 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.230 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8964788 kB' 'MemAvailable: 10558732 kB' 'Buffers: 2436 kB' 'Cached: 1807324 kB' 'SwapCached: 0 kB' 'Active: 463140 kB' 'Inactive: 1463628 kB' 'Active(anon): 127480 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118844 kB' 'Mapped: 50820 kB' 'Shmem: 10472 kB' 'KReclaimable: 63188 kB' 'Slab: 141520 kB' 'SReclaimable: 63188 kB' 'SUnreclaim: 78332 kB' 'KernelStack: 6312 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 336592 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:32.230 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.230 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.230 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.230 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.230 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.230 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.493 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 
19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.494 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
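Annotation: the repeated "[[ <key> == HugePages_Surp ]] ... continue" entries above are the xtrace of setup/common.sh's get_meminfo walking every key/value pair in /proc/meminfo until it reaches the requested field (HugePages_Surp here) and echoes its value. A minimal sketch of that scan, assuming only the shape suggested by the traced commands (a hypothetical get_meminfo_sketch helper, not the verbatim SPDK code):

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # A node argument switches the source to that node's sysfs meminfo file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line key val _
        while IFS= read -r line; do
            line=${line#"Node $node "}        # per-node files prefix each line with "Node <N> "
            IFS=': ' read -r key val _ <<<"$line"
            if [[ $key == "$get" ]]; then     # first matching key wins; print its numeric value
                echo "$val"
                return 0
            fi
        done <"$mem_f"
        return 1                              # requested key not present in this file
    }

    # e.g. surp=$(get_meminfo_sketch HugePages_Surp)   # prints 0 in the run traced above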
00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8964656 kB' 'MemAvailable: 10558600 kB' 'Buffers: 2436 kB' 'Cached: 1807324 kB' 'SwapCached: 0 kB' 'Active: 462892 kB' 'Inactive: 1463628 kB' 'Active(anon): 127232 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118596 kB' 'Mapped: 50716 kB' 'Shmem: 10472 kB' 'KReclaimable: 63188 kB' 'Slab: 141516 kB' 'SReclaimable: 63188 kB' 'SUnreclaim: 78328 kB' 'KernelStack: 6256 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 336592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.495 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 
19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.496 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:32.497 nr_hugepages=512 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:32.497 resv_hugepages=0 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:32.497 surplus_hugepages=0 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:32.497 anon_hugepages=0 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:32.497 19:26:23 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8964656 kB' 'MemAvailable: 10558600 kB' 'Buffers: 2436 kB' 'Cached: 1807324 kB' 'SwapCached: 0 kB' 'Active: 462832 kB' 'Inactive: 1463628 kB' 'Active(anon): 127172 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118592 kB' 'Mapped: 50716 kB' 'Shmem: 10472 kB' 'KReclaimable: 63188 kB' 'Slab: 141516 kB' 'SReclaimable: 63188 kB' 'SUnreclaim: 78328 kB' 'KernelStack: 6272 kB' 'PageTables: 3748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 336592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 19:26:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
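Annotation: by this point the trace has already resolved HugePages_Rsvd to 0 (resv=0) and printed nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0; the scan now in progress looks up HugePages_Total so hugepages.sh can re-check the accounting identity at line 110. A sketch of that arithmetic with the values printed in this run, reusing the hypothetical get_meminfo_sketch helper above:

    nr_hugepages=512                              # page count this custom_alloc run expects
    surp=$(get_meminfo_sketch HugePages_Surp)     # -> 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # -> 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)   # -> 512, echoed by the scan just below
    # The checks traced at hugepages.sh@107/@110 require total == requested + surplus + reserved,
    # i.e. 512 == 512 + 0 + 0 here; a mismatch would fail the custom_alloc test.
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2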
00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:32.498 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8964656 kB' 'MemUsed: 3277324 kB' 'SwapCached: 0 kB' 'Active: 462896 kB' 'Inactive: 1463628 kB' 'Active(anon): 127236 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1809760 kB' 'Mapped: 50716 kB' 'AnonPages: 118616 kB' 'Shmem: 10472 kB' 'KernelStack: 6288 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63188 kB' 'Slab: 141512 kB' 'SReclaimable: 63188 kB' 'SUnreclaim: 78324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.499 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
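The run of IFS=': ' / read -r var val _ / continue entries above and below is bash xtrace from the get_meminfo helper in setup/common.sh: it walks the memory counters one "Field: value" line at a time, skips every field that does not match the one requested, and echoes the value once it reaches the match (HugePages_Surp here, which yields the echo 0 just below). A minimal sketch of that pattern, with a simplified body rather than the exact SPDK source:

    get_meminfo() {
        local get=$1 mem_f=/proc/meminfo var val _
        while IFS=': ' read -r var val _; do    # split "Field: value kB" on ':' and spaces
            [[ $var == "$get" ]] || continue    # not the requested field -> keep scanning
            echo "$val"                         # the numeric value; any "kB" unit lands in _
            return 0
        done < "$mem_f"
    }

    get_meminfo HugePages_Surp    # -> 0 on this run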
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:32.500 node0=512 expecting 512
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:32.500
00:04:32.500 real 0m0.742s
00:04:32.500 user 0m0.313s
00:04:32.500 sys 0m0.482s
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:32.500 19:26:23 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:32.500 ************************************
00:04:32.500 END TEST custom_alloc
00:04:32.500 ************************************
00:04:32.500 19:26:23 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:32.500 19:26:23 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:32.500 19:26:23 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:32.500 19:26:23 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:32.500 19:26:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:32.500 ************************************
00:04:32.500 START TEST no_shrink_alloc
00:04:32.500 ************************************
00:04:32.500 19:26:23
setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:32.500 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:32.500 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:32.500 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:32.500 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:32.500 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:32.500 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:32.500 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:32.500 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:32.500 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:32.500 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:32.500 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:32.500 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:32.500 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:32.500 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:32.500 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:32.500 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:32.500 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:32.500 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:32.500 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:32.500 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:32.500 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.500 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:33.069 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:33.069 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:33.069 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:33.069 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:33.069 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:33.069 19:26:23 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7921580 kB' 'MemAvailable: 9515532 kB' 'Buffers: 2436 kB' 'Cached: 1807332 kB' 'SwapCached: 0 kB' 'Active: 463316 kB' 'Inactive: 1463636 kB' 'Active(anon): 127656 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118760 kB' 'Mapped: 50960 kB' 'Shmem: 10472 kB' 'KReclaimable: 63188 kB' 'Slab: 141564 kB' 'SReclaimable: 63188 kB' 'SUnreclaim: 78376 kB' 'KernelStack: 6228 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
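The no_shrink_alloc prologue traced above sizes the test the same way: get_test_nr_hugepages 2097152 0 asks for 2097152 kB, which at the default 2048 kB hugepage size is 2097152 / 2048 = 1024 pages, all booked against the one requested node (nodes_test[0]=1024). verify_nr_hugepages then re-reads the kernel counters through get_meminfo, and the common.sh@17-@29 entries above show how the data source is chosen before the field scan that continues below. A hedged sketch of that selection; the extglob prefix strip is quoted from the trace, the surrounding lines are illustrative rather than the exact source:

    shopt -s extglob                                    # required by the +([0-9]) pattern below
    node=${2:-}                                         # empty -> system-wide totals
    mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"                           # slurp the file into an array
    mem=("${mem[@]#Node +([0-9]) }")                    # per-node files prefix each line with "Node N "

With node left empty the per-node path does not exist, so the helper falls back to /proc/meminfo and the prefix strip is a no-op, which is exactly what the [[ -e ... ]] and [[ -n '' ]] entries above show.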
00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.069 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
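The scan below keeps going through the remaining fields; when it reaches AnonHugePages it echoes 0, which hugepages.sh@97 stores as anon=0, and verify_nr_hugepages then repeats the same lookup for HugePages_Surp (surp=0) and HugePages_Rsvd (resv=0) further down before reporting the per-node totals, the same bookkeeping that produced the node0=512 expecting 512 line in the custom_alloc summary earlier. A rough sketch of that accounting, reusing the get_meminfo sketch above; expected is an illustrative placeholder, and treating the '+= 0' at hugepages.sh@117 as the surplus adjustment is an assumption:

    nodes_test=([0]=1024); expected=1024       # what get_test_nr_hugepages booked for no_shrink_alloc
    anon=$(get_meminfo AnonHugePages)          # THP-backed pages; the test expects this to stay 0
    surp=$(get_meminfo HugePages_Surp)         # surplus pages, folded into the per-node count below
    resv=$(get_meminfo HugePages_Rsvd)         # reserved-but-unfaulted pages
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += surp ))                              # assumed to be the hugepages.sh@117 step (+= 0 here)
        echo "node$node=${nodes_test[node]} expecting $expected"    # should read node0=1024 expecting 1024
        [[ ${nodes_test[node]} == "$expected" ]]                    # the '[[ 512 == \5\1\2 ]]'-style check at @130
    done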
00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.070 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7921864 kB' 'MemAvailable: 9515816 kB' 'Buffers: 2436 kB' 'Cached: 1807332 kB' 'SwapCached: 0 kB' 'Active: 463108 kB' 'Inactive: 1463636 kB' 'Active(anon): 127448 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118584 kB' 'Mapped: 50956 kB' 'Shmem: 10472 kB' 'KReclaimable: 63188 kB' 'Slab: 141564 kB' 'SReclaimable: 63188 kB' 'SUnreclaim: 78376 kB' 'KernelStack: 6228 kB' 'PageTables: 3692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.071 19:26:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 
19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.071 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.072 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.073 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.073 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.073 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:33.073 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:33.073 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:33.073 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:33.073 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:33.073 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:33.073 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.073 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.073 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.073 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.073 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.073 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.073 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.073 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7921864 kB' 'MemAvailable: 9515816 kB' 'Buffers: 2436 kB' 'Cached: 1807332 kB' 'SwapCached: 0 kB' 'Active: 463160 kB' 'Inactive: 1463636 kB' 'Active(anon): 127500 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 
00:04:33.073 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... common.sh@17-31: local get=HugePages_Rsvd, local node=, mem_f=/proc/meminfo, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }"), IFS=': ' ...]
00:04:33.073 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7921864 kB' 'MemAvailable: 9515816 kB' 'Buffers: 2436 kB' 'Cached: 1807332 kB' 'SwapCached: 0 kB' 'Active: 463160 kB' 'Inactive: 1463636 kB' 'Active(anon): 127500 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118648 kB' 'Mapped: 50764 kB' 'Shmem: 10472 kB' 'KReclaimable: 63188 kB' 'Slab: 141564 kB' 'SReclaimable: 63188 kB' 'SUnreclaim: 78376 kB' 'KernelStack: 6288 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
[... common.sh@31-32 read/compare/continue cycle for every key from MemTotal through HugePages_Free, none matching HugePages_Rsvd ...]
00:04:33.336 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:33.336 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:33.336 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:33.336 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:33.336 nr_hugepages=1024
19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:33.336 resv_hugepages=0
19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:33.336 surplus_hugepages=0
19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:33.336 anon_hugepages=0
19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:33.336 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:33.336 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
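At this point the test holds surp=0, resv=0 and nr_hugepages=1024, and hugepages.sh@107-110 check that the kernel-reported totals are consistent with the requested allocation. A worked sketch of that accounting, reusing the illustrative get_meminfo_value helper from above (variable names mirror the trace; this is not the verbatim hugepages.sh code):

    nr_hugepages=1024                              # value requested by the earlier allocation step
    surp=$(get_meminfo_value HugePages_Surp)       # surplus pages, 0 in this run
    resv=$(get_meminfo_value HugePages_Rsvd)       # reserved pages, 0 in this run
    total=$(get_meminfo_value HugePages_Total)
    # the check performed at hugepages.sh@107/@110; it passes here because surp and resv are both 0
    (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage accounting" >&2
    (( total == nr_hugepages ))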
00:04:33.336 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... common.sh@17-31: local get=HugePages_Total, local node=, mem_f=/proc/meminfo, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }"), IFS=': ' ...]
00:04:33.337 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7921864 kB' 'MemAvailable: 9515816 kB' 'Buffers: 2436 kB' 'Cached: 1807332 kB' 'SwapCached: 0 kB' 'Active: 463260 kB' 'Inactive: 1463636 kB' 'Active(anon): 127600 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118704 kB' 'Mapped: 50764 kB' 'Shmem: 10472 kB' 'KReclaimable: 63188 kB' 'Slab: 141564 kB' 'SReclaimable: 63188 kB' 'SUnreclaim: 78376 kB' 'KernelStack: 6272 kB' 'PageTables: 3752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
[... common.sh@31-32 read/compare/continue cycle for every key from MemTotal through Unaccepted, none matching HugePages_Total ...]
00:04:33.338 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:33.338 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:33.338 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:33.338 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:33.338 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:33.338 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:33.338 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:33.338 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:33.338 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:33.338 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:33.338 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:33.338 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
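get_nodes walks /sys/devices/system/node/node* to learn how many NUMA nodes are present, and get_meminfo then takes an optional node argument so the same scan can run against the node-local meminfo file, whose lines carry a "Node <N> " prefix that is stripped at common.sh@29. A sketch of that per-node variant under the same assumptions as the helper above (illustrative name, not the verbatim SPDK code):

    # Echo a field from /sys/devices/system/node/node<N>/meminfo, e.g.: get_node_meminfo_value HugePages_Surp 0
    get_node_meminfo_value() {
        local get=$1 node=$2 line var val
        while read -r line; do
            line=${line#"Node $node "}             # node-local lines look like "Node 0 HugePages_Surp:   0"
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }

    # report the surplus count for every node, mirroring the per-node loop in hugepages.sh
    for n in /sys/devices/system/node/node[0-9]*; do
        echo "node${n##*node}: HugePages_Surp=$(get_node_meminfo_value HugePages_Surp "${n##*node}")"
    done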
0' 00:04:33.338 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.338 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.338 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.338 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.338 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 
19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.339 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.340 19:26:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:33.340 node0=1024 expecting 1024 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.340 19:26:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:33.660 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:33.923 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:33.923 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:33.923 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:33.923 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:33.923 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:33.923 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:33.923 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:33.923 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 
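For readers skimming the xtrace above: each get_meminfo call in setup/common.sh walks /proc/meminfo (or a per-node meminfo file) key by key, hits "continue" for every key that does not match the requested one, and echoes the matching value before returning. The following is a minimal sketch reconstructed from the trace, not copied from the SPDK source; the function and variable names mirror what the trace prints, and the body should be treated as an approximation.

# Sketch only: reconstructed from the xtrace in this log, not from setup/common.sh itself.
get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo mem
    # Per-node lookups read that node's own meminfo when it exists
    # (the trace shows the check against /sys/devices/system/node/node$node/meminfo).
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")        # strip the "Node N" prefix on per-node files
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # every non-matching key is a "continue" line above
        echo "$val"                         # e.g. 0 for HugePages_Surp, 1024 for HugePages_Total
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

Under that reading, a single call such as get_meminfo HugePages_Surp accounts for the long run of "[[ X == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" pairs above: one pair per meminfo key until HugePages_Surp is reached and its value (0) is echoed.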
00:04:33.923 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:33.923 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:33.923 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:33.923 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7919780 kB' 'MemAvailable: 9513732 kB' 'Buffers: 2436 kB' 'Cached: 1807332 kB' 'SwapCached: 0 kB' 'Active: 463708 kB' 'Inactive: 1463636 kB' 'Active(anon): 128048 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 119180 kB' 'Mapped: 50988 kB' 'Shmem: 10472 kB' 'KReclaimable: 63188 kB' 'Slab: 141560 kB' 'SReclaimable: 63188 kB' 'SUnreclaim: 78372 kB' 'KernelStack: 6260 kB' 'PageTables: 3476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.924 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.925 19:26:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7920436 kB' 'MemAvailable: 9514388 kB' 'Buffers: 2436 kB' 'Cached: 1807332 kB' 'SwapCached: 0 kB' 'Active: 462964 kB' 'Inactive: 1463636 kB' 'Active(anon): 127304 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 118404 kB' 'Mapped: 50840 kB' 'Shmem: 10472 kB' 'KReclaimable: 63188 kB' 'Slab: 141560 kB' 'SReclaimable: 63188 kB' 'SUnreclaim: 78372 kB' 'KernelStack: 6264 kB' 'PageTables: 3572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 
19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.925 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
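The surrounding trace (anon=0, then HugePages_Surp, then HugePages_Rsvd) is the data-gathering half of verify_nr_hugepages in setup/hugepages.sh, which ends in the "node0=1024 expecting 1024" check seen earlier in this log. The snippet below is an inferred approximation of that final check, assuming the get_meminfo helper sketched above (or the real one from setup/common.sh) is sourced; the per-node lookup key and the exact arithmetic are assumptions, not taken from the script.

# Sketch only, inferred from the hugepages.sh trace in this log (not the actual script):
# tally the node's hugepages, fold in the surplus count just read (0 here), and
# compare the result against the expected allocation.
expected=1024
node0_pages=$(get_meminfo HugePages_Free 0)   # hypothetical per-node lookup key
surp=$(get_meminfo HugePages_Surp 0)          # 0 in the trace above
(( node0_pages += surp ))
echo "node0=$node0_pages expecting $expected"
[[ $node0_pages -eq $expected ]]              # mirrors the "[[ 1024 == \1\0\2\4 ]]" check above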
00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.926 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7920436 kB' 'MemAvailable: 9514388 kB' 'Buffers: 2436 kB' 'Cached: 1807332 kB' 'SwapCached: 0 kB' 'Active: 462920 kB' 'Inactive: 1463636 kB' 'Active(anon): 127260 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 118360 kB' 'Mapped: 50840 kB' 'Shmem: 10472 kB' 'KReclaimable: 63188 kB' 'Slab: 141560 kB' 'SReclaimable: 63188 kB' 'SUnreclaim: 78372 kB' 'KernelStack: 6248 kB' 'PageTables: 3524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.927 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.928 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:33.929 nr_hugepages=1024 00:04:33.929 resv_hugepages=0 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:33.929 surplus_hugepages=0 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:33.929 anon_hugepages=0 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # 
printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7920436 kB' 'MemAvailable: 9514388 kB' 'Buffers: 2436 kB' 'Cached: 1807332 kB' 'SwapCached: 0 kB' 'Active: 462856 kB' 'Inactive: 1463636 kB' 'Active(anon): 127196 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 118556 kB' 'Mapped: 50764 kB' 'Shmem: 10472 kB' 'KReclaimable: 63188 kB' 'Slab: 141560 kB' 'SReclaimable: 63188 kB' 'SUnreclaim: 78372 kB' 'KernelStack: 6272 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.929 
19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.929 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
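The long run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "continue" pairs above and below is the xtrace of the get_meminfo helper in setup/common.sh walking the printf'd copy of /proc/meminfo one field at a time; the backslash-escaped operand is simply how bash xtrace renders the quoted, literal right-hand side of ==. A minimal sketch of that loop is given here: the names (get, node, mem_f, var, val) follow the trace, but the body is an approximation rather than the verbatim script, and the per-node handling is reduced to a path switch plus a "Node N " prefix strip.

    # Simplified sketch of the get_meminfo loop exercised in the trace.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo var val _
        # Per-node queries read that node's own meminfo file instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Walk the file field by field; every non-matching key is skipped
        # (the "continue" entries in the trace) until the requested one matches.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        echo 0
    }

In this run the HugePages_Total query eventually matches, echoes 1024, and hugepages.sh then re-checks that 1024 == nr_hugepages + surp + resv before moving on to the per-node pass.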
00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.930 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7920436 kB' 'MemUsed: 4321544 kB' 'SwapCached: 0 kB' 'Active: 463140 kB' 'Inactive: 1463628 kB' 'Active(anon): 127480 kB' 'Inactive(anon): 0 kB' 'Active(file): 335660 kB' 'Inactive(file): 1463628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1809760 kB' 'Mapped: 50716 kB' 'AnonPages: 118612 kB' 'Shmem: 10472 kB' 'KernelStack: 6272 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'KReclaimable: 63188 kB' 'Slab: 141560 kB' 'SReclaimable: 63188 kB' 'SUnreclaim: 78372 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.931 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.932 19:26:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
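The scan still in progress here is the same loop reading /sys/devices/system/node/node0/meminfo for HugePages_Surp; it resolves to 0 just below, after which hugepages.sh credits node 0 with the 1024 configured pages and prints "node0=1024 expecting 1024". The following is a rough reconstruction of that per-node accounting from the nodes_sys / nodes_test lines in the trace: it assumes a single node with 1024 x 2 MiB pages, reuses the get_meminfo sketch above, and omits the surplus-distribution and sorting steps the real hugepages.sh performs around it.

    # Abbreviated per-node hugepage accounting, reconstructed from the trace.
    shopt -s extglob
    declare -a nodes_sys nodes_test
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=1024
        nodes_test[${node##*node}]=1024
    done
    resv=0   # reserved pages, read earlier via get_meminfo HugePages_Rsvd
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
        [[ ${nodes_test[node]} == "${nodes_sys[node]}" ]]   # a mismatch fails the test
    done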
00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:33.932 node0=1024 expecting 1024 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:33.932 00:04:33.932 real 0m1.512s 00:04:33.932 user 0m0.682s 00:04:33.932 sys 0m0.934s 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.932 19:26:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:33.932 ************************************ 00:04:33.932 END TEST no_shrink_alloc 00:04:33.932 ************************************ 00:04:34.228 19:26:24 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:34.228 19:26:24 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:34.228 19:26:24 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:34.228 19:26:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:34.228 19:26:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:34.228 19:26:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:34.228 19:26:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:34.228 19:26:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:34.228 19:26:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:34.228 19:26:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:34.228 00:04:34.228 real 0m6.779s 00:04:34.228 user 0m2.917s 00:04:34.228 sys 0m4.132s 00:04:34.228 19:26:24 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.228 ************************************ 00:04:34.228 END TEST hugepages 00:04:34.228 19:26:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:34.228 ************************************ 00:04:34.228 19:26:24 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:34.228 19:26:24 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:34.228 19:26:24 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.228 19:26:24 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.228 19:26:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:34.228 ************************************ 00:04:34.228 START TEST driver 00:04:34.228 ************************************ 00:04:34.228 19:26:24 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:34.228 * Looking for test storage... 00:04:34.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:34.228 19:26:24 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:34.228 19:26:24 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:34.228 19:26:24 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.791 19:26:30 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:40.791 19:26:30 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.791 19:26:30 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.791 19:26:30 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:40.791 ************************************ 00:04:40.791 START TEST guess_driver 00:04:40.791 ************************************ 00:04:40.791 19:26:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:40.791 19:26:30 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:40.791 19:26:30 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:40.791 19:26:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:40.791 19:26:30 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:40.791 19:26:30 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:40.791 19:26:30 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:40.791 19:26:30 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:40.791 19:26:30 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:40.791 19:26:30 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:40.791 19:26:30 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:40.791 19:26:30 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:40.791 
19:26:30 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:40.791 19:26:30 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:40.791 19:26:30 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:40.791 19:26:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:40.791 19:26:30 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:40.791 19:26:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:40.791 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:40.791 19:26:31 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:40.791 19:26:31 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:40.791 19:26:31 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:40.791 19:26:31 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:40.791 Looking for driver=uio_pci_generic 00:04:40.791 19:26:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.791 19:26:31 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:40.791 19:26:31 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.791 19:26:31 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:40.791 19:26:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:40.791 19:26:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:40.791 19:26:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.725 19:26:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.725 19:26:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:41.725 19:26:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.725 19:26:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.725 19:26:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:41.725 19:26:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.725 19:26:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.725 19:26:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:41.725 19:26:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.725 19:26:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.725 19:26:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:41.725 19:26:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.725 19:26:32 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:41.725 19:26:32 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:41.725 19:26:32 setup.sh.driver.guess_driver 
-- setup/common.sh@9 -- # [[ reset == output ]] 00:04:41.725 19:26:32 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:48.356 00:04:48.356 real 0m7.532s 00:04:48.356 user 0m0.820s 00:04:48.356 sys 0m1.833s 00:04:48.356 19:26:38 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.356 19:26:38 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:48.356 ************************************ 00:04:48.356 END TEST guess_driver 00:04:48.356 ************************************ 00:04:48.356 19:26:38 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:48.356 00:04:48.356 real 0m13.764s 00:04:48.356 user 0m1.206s 00:04:48.356 sys 0m2.808s 00:04:48.356 19:26:38 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.356 19:26:38 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:48.356 ************************************ 00:04:48.356 END TEST driver 00:04:48.356 ************************************ 00:04:48.356 19:26:38 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:48.356 19:26:38 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:48.356 19:26:38 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.356 19:26:38 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.356 19:26:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:48.356 ************************************ 00:04:48.356 START TEST devices 00:04:48.356 ************************************ 00:04:48.356 19:26:38 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:48.356 * Looking for test storage... 
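[annotation] The guess_driver entries above show the pick order: vfio-pci is tried first, but with no populated /sys/kernel/iommu_groups and unsafe no-IOMMU mode not enabled, driver.sh returns 1 and falls back to uio_pci_generic, confirming via "modprobe --show-depends" that the module resolves to real .ko files. A rough standalone re-creation of that decision (not the project's helper), assuming modprobe is in PATH:

    #!/usr/bin/env bash
    # Pick a userspace I/O driver roughly the way the trace does: prefer
    # vfio-pci when the IOMMU is usable, otherwise fall back to
    # uio_pci_generic if its module (and dependencies) can be resolved.
    pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*)
        local noiommu=/sys/module/vfio/parameters/enable_unsafe_noiommu_mode

        if [[ -e ${groups[0]} ]] && (( ${#groups[@]} > 0 )); then
            echo vfio-pci; return 0
        fi
        if [[ -e $noiommu && $(cat "$noiommu") == Y ]]; then
            echo vfio-pci; return 0
        fi
        if modprobe --show-depends uio_pci_generic | grep -q '\.ko'; then
            echo uio_pci_generic; return 0
        fi
        echo 'No valid driver found' >&2; return 1
    }

    pick_driver   # on the VM in this trace it prints: uio_pci_generic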
00:04:48.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:48.356 19:26:38 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:48.356 19:26:38 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:48.356 19:26:38 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:48.356 19:26:38 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:49.289 19:26:39 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:49.289 19:26:39 
setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:49.289 19:26:39 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:49.289 19:26:39 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:49.289 19:26:39 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:49.289 19:26:39 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:49.289 19:26:39 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:49.289 19:26:39 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:49.289 19:26:39 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:49.289 19:26:39 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:49.289 19:26:39 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:49.289 19:26:39 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:49.289 19:26:39 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:49.289 19:26:39 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:49.289 19:26:39 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:49.289 19:26:39 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:49.289 No valid GPT data, bailing 00:04:49.289 19:26:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:49.289 19:26:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:49.289 19:26:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:49.289 19:26:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:49.289 19:26:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:49.289 19:26:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:49.289 19:26:40 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:49.289 19:26:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:49.289 19:26:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:49.289 19:26:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:49.289 19:26:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:49.289 19:26:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:49.289 19:26:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:49.289 19:26:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:49.289 19:26:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:49.289 
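[annotation] The devices test above first marks any zoned namespaces (none here: every /sys/block/nvme*/queue/zoned reads "none"), then walks each non-controller nvme block device, probes it for an existing partition table ("No valid GPT data, bailing" means the disk is free to use), and keeps it only if it is at least min_disk_size (3 GiB). A simplified sketch of that filter, using plain blkid instead of the repo's scripts/spdk-gpt.py:

    #!/usr/bin/env bash
    # List NVMe namespaces that are not zoned, carry no partition table,
    # and are at least 3 GiB -- a simplified version of the selection
    # logic in the trace above.
    min_disk_size=$((3 * 1024 * 1024 * 1024))

    for dev in /sys/block/nvme*n*; do
        name=${dev##*/}
        [[ $name == *c* ]] && continue          # skip controller paths like nvme3c3n1
        [[ $(cat "$dev/queue/zoned" 2>/dev/null) != none ]] && continue
        blkid -s PTTYPE -o value "/dev/$name" | grep -q . && continue   # has a partition table
        size=$(( $(cat "$dev/size") * 512 ))    # the sysfs size file is in 512-byte sectors
        (( size >= min_disk_size )) && echo "/dev/$name ($size bytes)"
    done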
19:26:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:49.289 19:26:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:49.289 19:26:40 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:49.289 No valid GPT data, bailing 00:04:49.289 19:26:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:49.289 19:26:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:49.289 19:26:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:49.289 19:26:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:49.289 19:26:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:49.548 19:26:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:49.548 19:26:40 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:49.548 19:26:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:04:49.548 19:26:40 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:49.548 No valid GPT data, bailing 00:04:49.548 19:26:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:49.548 19:26:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:49.548 19:26:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:49.548 19:26:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:49.548 19:26:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:04:49.548 19:26:40 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:04:49.548 19:26:40 
setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:04:49.548 19:26:40 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:04:49.548 No valid GPT data, bailing 00:04:49.548 19:26:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:49.548 19:26:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:49.548 19:26:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:04:49.548 19:26:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:04:49.548 19:26:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:04:49.548 19:26:40 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:04:49.548 19:26:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:04:49.548 19:26:40 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:04:49.548 No valid GPT data, bailing 00:04:49.548 19:26:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:49.548 19:26:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:49.548 19:26:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:04:49.548 19:26:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:04:49.548 19:26:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:04:49.548 19:26:40 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:04:49.548 19:26:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:49.548 19:26:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:04:49.548 19:26:40 setup.sh.devices 
-- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:49.807 No valid GPT data, bailing 00:04:49.807 19:26:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:49.807 19:26:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:49.807 19:26:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:49.807 19:26:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:49.807 19:26:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:49.807 19:26:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:49.807 19:26:40 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:04:49.807 19:26:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:04:49.807 19:26:40 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:49.807 19:26:40 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:49.807 19:26:40 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:49.807 19:26:40 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.807 19:26:40 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.807 19:26:40 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:49.807 ************************************ 00:04:49.807 START TEST nvme_mount 00:04:49.807 ************************************ 00:04:49.807 19:26:40 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:49.807 19:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:49.807 19:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:49.807 19:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:49.807 19:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:49.807 19:26:40 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:49.807 19:26:40 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:49.807 19:26:40 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:49.807 19:26:40 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:49.807 19:26:40 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:49.807 19:26:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:49.807 19:26:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:49.807 19:26:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:49.807 19:26:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:49.807 19:26:40 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:49.807 19:26:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:49.807 19:26:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:49.807 19:26:40 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:49.807 19:26:40 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:49.807 19:26:40 setup.sh.devices.nvme_mount -- setup/common.sh@53 
-- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:50.742 Creating new GPT entries in memory. 00:04:50.742 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:50.742 other utilities. 00:04:50.742 19:26:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:50.742 19:26:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:50.742 19:26:41 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:50.742 19:26:41 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:50.742 19:26:41 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:51.726 Creating new GPT entries in memory. 00:04:51.726 The operation has completed successfully. 00:04:51.726 19:26:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:51.726 19:26:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:51.726 19:26:42 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59546 00:04:51.726 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:51.726 19:26:42 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:51.726 19:26:42 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:51.726 19:26:42 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:51.726 19:26:42 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:51.726 19:26:42 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:51.985 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:51.985 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:51.985 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:51.985 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:51.985 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:51.985 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:51.985 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:51.985 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:51.985 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:51.985 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.985 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:51.985 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:51.985 19:26:42 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- 
# [[ output == output ]] 00:04:51.985 19:26:42 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:51.985 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:51.985 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:51.985 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:51.985 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.985 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:51.985 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.244 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:52.244 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.244 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:52.244 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.244 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:52.244 19:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.502 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:52.502 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.758 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:52.758 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:52.758 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.758 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:52.758 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:52.758 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:52.758 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.758 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.758 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.759 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:52.759 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:52.759 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.759 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:53.016 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:53.016 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 
00:04:53.016 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:53.016 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:53.016 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:53.016 19:26:43 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:53.016 19:26:43 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.016 19:26:43 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:53.016 19:26:43 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:53.016 19:26:43 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.274 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:53.274 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:53.274 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:53.274 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.274 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:53.274 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:53.274 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:53.274 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:53.274 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:53.274 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.274 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:53.274 19:26:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:53.274 19:26:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.274 19:26:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:53.274 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:53.274 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:53.274 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:53.274 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.274 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:53.274 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.532 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:53.532 
19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.532 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:53.532 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.532 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:53.532 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.099 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:54.099 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.099 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.099 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:54.099 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.099 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.099 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.099 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.099 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:54.099 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:54.099 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:54.099 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:54.099 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:54.099 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:54.099 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:54.099 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:54.099 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.099 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:54.099 19:26:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:54.099 19:26:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.099 19:26:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:54.665 19:26:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:54.666 19:26:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:54.666 19:26:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:54.666 19:26:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.666 19:26:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:54.666 19:26:45 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.666 19:26:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:54.666 19:26:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.666 19:26:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:54.666 19:26:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.666 19:26:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:54.666 19:26:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.274 19:26:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:55.274 19:26:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.274 19:26:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:55.274 19:26:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:55.274 19:26:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:55.274 19:26:45 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:55.274 19:26:45 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.274 19:26:45 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:55.274 19:26:45 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:55.274 19:26:45 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:55.274 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:55.274 00:04:55.274 real 0m5.587s 00:04:55.274 user 0m1.495s 00:04:55.274 sys 0m1.812s 00:04:55.274 19:26:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.274 ************************************ 00:04:55.274 END TEST nvme_mount 00:04:55.274 ************************************ 00:04:55.274 19:26:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:55.274 19:26:46 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:55.274 19:26:46 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:55.274 19:26:46 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.274 19:26:46 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.274 19:26:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:55.274 ************************************ 00:04:55.274 START TEST dm_mount 00:04:55.274 ************************************ 00:04:55.274 19:26:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:55.274 19:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:55.274 19:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:55.274 19:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:55.274 19:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:55.274 19:26:46 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:55.274 19:26:46 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:55.274 
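[annotation] The nvme_mount run that just finished (about 5.6 s of wall time) follows a fixed recipe: wipe the target disk's partition table with sgdisk --zap-all, create a single 128 MiB partition while a helper waits for the kernel's partition uevent, format it ext4, mount it, drop a test file, verify that the mounted PCI device is left bound, then tear everything down with wipefs. A condensed, standalone sketch of that sequence -- destructive, so the device path is a placeholder you must change; udevadm settle stands in for the repo's sync_dev_uevents.sh:

    #!/usr/bin/env bash
    set -euo pipefail
    disk=/dev/nvme0n1          # PLACEHOLDER: pick an idle disk; this erases it
    mnt=$(mktemp -d)

    sgdisk "$disk" --zap-all                    # destroy any existing GPT/MBR
    sgdisk "$disk" --new=1:2048:264191          # one 128 MiB partition, as in the trace
    udevadm settle                              # wait for the partition node to appear
    mkfs.ext4 -qF "${disk}p1"
    mount "${disk}p1" "$mnt"
    echo test > "$mnt/test_nvme"                # the dummy file the test verifies

    # teardown, mirroring cleanup_nvme in the trace
    rm "$mnt/test_nvme"
    umount "$mnt"
    wipefs --all "${disk}p1"
    wipefs --all "$disk"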
19:26:46 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:55.274 19:26:46 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:55.274 19:26:46 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:55.274 19:26:46 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:55.274 19:26:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:55.274 19:26:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.274 19:26:46 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:55.274 19:26:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:55.274 19:26:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.274 19:26:46 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:55.274 19:26:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:55.274 19:26:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.274 19:26:46 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:55.274 19:26:46 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:55.274 19:26:46 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:56.650 Creating new GPT entries in memory. 00:04:56.650 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:56.650 other utilities. 00:04:56.650 19:26:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:56.650 19:26:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:56.650 19:26:47 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:56.650 19:26:47 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:56.650 19:26:47 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:57.584 Creating new GPT entries in memory. 00:04:57.584 The operation has completed successfully. 00:04:57.584 19:26:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:57.584 19:26:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.584 19:26:48 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:57.584 19:26:48 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:57.584 19:26:48 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:58.521 The operation has completed successfully. 
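[annotation] Both dm_mount partitions are now in place (sectors 2048-264191 and 264192-526335, 128 MiB each); the entries that follow bind nvme0n1p1 and nvme0n1p2 into a device-mapper target named nvme_dm_test, format it ext4, and mount it under test/setup/dm_mount. The trace only shows "dmsetup create nvme_dm_test" without its table, so the linear concatenation below is an illustrative guess at such a table, not a copy of devices.sh:

    #!/usr/bin/env bash
    set -euo pipefail
    p1=/dev/nvme0n1p1              # PLACEHOLDER partitions from the step above
    p2=/dev/nvme0n1p2
    s1=$(blockdev --getsz "$p1")   # sizes in 512-byte sectors
    s2=$(blockdev --getsz "$p2")

    # Concatenate the two partitions into one linear dm device (assumed
    # table; the trace does not show what devices.sh feeds to dmsetup).
    dmsetup create nvme_dm_test <<EOF
    0 $s1 linear $p1 0
    $s1 $s2 linear $p2 0
    EOF

    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
    mkdir -p /tmp/dm_mount && mount /dev/mapper/nvme_dm_test /tmp/dm_mount

    # teardown, mirroring cleanup_dm in the trace
    umount /tmp/dm_mount
    dmsetup remove --force nvme_dm_test
    wipefs --all "$p1" "$p2"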
00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 60177 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.521 19:26:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:58.786 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:58.786 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:58.786 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:58.786 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.786 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:58.786 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.044 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:59.044 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.044 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:59.044 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.044 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:59.044 19:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.301 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:59.301 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.559 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:59.559 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:59.559 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.559 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:59.559 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.559 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.559 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:59.559 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:59.559 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:59.559 19:26:50 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:04:59.559 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:59.559 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:59.559 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:59.559 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:59.559 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.559 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:59.559 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:59.559 19:26:50 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.559 19:26:50 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:59.817 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:59.817 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:59.817 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:59.817 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.817 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:59.817 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.077 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.077 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.077 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.077 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.077 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.077 19:26:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.644 19:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.644 19:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.644 19:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.644 19:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:00.644 19:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:00.644 19:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:00.644 19:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.644 19:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:00.644 19:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:00.644 19:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.644 19:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
00:05:00.644 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:00.644 19:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:00.644 19:26:51 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:00.644 00:05:00.644 real 0m5.352s 00:05:00.644 user 0m1.012s 00:05:00.644 sys 0m1.291s 00:05:00.644 19:26:51 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.644 19:26:51 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:00.644 ************************************ 00:05:00.644 END TEST dm_mount 00:05:00.644 ************************************ 00:05:00.902 19:26:51 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:00.902 19:26:51 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:00.902 19:26:51 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:00.902 19:26:51 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.902 19:26:51 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.902 19:26:51 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:00.902 19:26:51 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:00.902 19:26:51 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:01.161 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:01.161 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:01.161 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:01.161 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:01.161 19:26:51 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:01.161 19:26:51 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:01.161 19:26:51 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:01.161 19:26:51 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.161 19:26:51 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:01.161 19:26:51 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.161 19:26:51 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:01.161 00:05:01.161 real 0m13.114s 00:05:01.161 user 0m3.433s 00:05:01.161 sys 0m4.081s 00:05:01.161 19:26:51 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.161 19:26:51 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:01.161 ************************************ 00:05:01.161 END TEST devices 00:05:01.161 ************************************ 00:05:01.161 19:26:51 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:01.161 00:05:01.161 real 0m46.864s 00:05:01.161 user 0m10.836s 00:05:01.161 sys 0m16.053s 00:05:01.161 19:26:51 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.161 19:26:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:01.161 ************************************ 00:05:01.161 END TEST setup.sh 00:05:01.161 ************************************ 00:05:01.161 19:26:51 -- common/autotest_common.sh@1142 -- # return 0 00:05:01.161 19:26:51 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:01.728 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.341 Hugepages 00:05:02.341 node hugesize free / total 00:05:02.341 node0 1048576kB 0 / 0 00:05:02.341 node0 2048kB 2048 / 2048 00:05:02.341 00:05:02.341 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:02.341 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:02.341 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:02.606 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:02.606 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:02.606 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:02.607 19:26:53 -- spdk/autotest.sh@130 -- # uname -s 00:05:02.607 19:26:53 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:02.607 19:26:53 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:02.607 19:26:53 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:03.177 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.111 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.111 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.111 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.111 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.111 19:26:54 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:05.043 19:26:55 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:05.043 19:26:55 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:05.043 19:26:55 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:05.043 19:26:55 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:05.043 19:26:55 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:05.043 19:26:55 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:05.043 19:26:55 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:05.043 19:26:55 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:05.043 19:26:55 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:05.301 19:26:55 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:05:05.301 19:26:55 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:05.301 19:26:55 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:05.621 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:05.879 Waiting for block devices as requested 00:05:05.879 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:06.137 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:06.138 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:06.138 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:11.401 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:11.401 19:27:02 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:11.401 19:27:02 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:11.401 19:27:02 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:11.401 19:27:02 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:05:11.401 19:27:02 -- common/autotest_common.sh@1502 -- # 
bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:11.401 19:27:02 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:11.401 19:27:02 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:11.401 19:27:02 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:05:11.401 19:27:02 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:05:11.401 19:27:02 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:05:11.401 19:27:02 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:11.401 19:27:02 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:05:11.401 19:27:02 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:11.402 19:27:02 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:11.402 19:27:02 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:11.402 19:27:02 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:11.402 19:27:02 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:11.402 19:27:02 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:11.402 19:27:02 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:05:11.402 19:27:02 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:11.402 19:27:02 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:11.402 19:27:02 -- common/autotest_common.sh@1557 -- # continue 00:05:11.402 19:27:02 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:11.402 19:27:02 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:11.402 19:27:02 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:11.402 19:27:02 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:05:11.402 19:27:02 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:11.402 19:27:02 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:11.402 19:27:02 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:11.402 19:27:02 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:11.402 19:27:02 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:11.402 19:27:02 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:11.402 19:27:02 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:11.402 19:27:02 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:11.402 19:27:02 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:11.402 19:27:02 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:11.402 19:27:02 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:11.402 19:27:02 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:11.402 19:27:02 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:11.402 19:27:02 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:11.402 19:27:02 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:11.402 19:27:02 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:11.402 19:27:02 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:11.402 19:27:02 -- common/autotest_common.sh@1557 -- # continue 00:05:11.402 19:27:02 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:11.402 19:27:02 -- common/autotest_common.sh@1539 -- # 
get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:11.402 19:27:02 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:11.402 19:27:02 -- common/autotest_common.sh@1502 -- # grep 0000:00:12.0/nvme/nvme 00:05:11.402 19:27:02 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:11.402 19:27:02 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:11.402 19:27:02 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:11.402 19:27:02 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:05:11.402 19:27:02 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:05:11.402 19:27:02 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:05:11.402 19:27:02 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:05:11.402 19:27:02 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:11.402 19:27:02 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:11.402 19:27:02 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:11.402 19:27:02 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:11.402 19:27:02 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:11.402 19:27:02 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:11.402 19:27:02 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme2 00:05:11.402 19:27:02 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:11.402 19:27:02 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:11.402 19:27:02 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:11.402 19:27:02 -- common/autotest_common.sh@1557 -- # continue 00:05:11.402 19:27:02 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:11.402 19:27:02 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:11.402 19:27:02 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:11.402 19:27:02 -- common/autotest_common.sh@1502 -- # grep 0000:00:13.0/nvme/nvme 00:05:11.402 19:27:02 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:11.402 19:27:02 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:11.402 19:27:02 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:11.402 19:27:02 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:05:11.402 19:27:02 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:05:11.402 19:27:02 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:05:11.402 19:27:02 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:05:11.402 19:27:02 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:11.402 19:27:02 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:11.402 19:27:02 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:11.402 19:27:02 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:11.402 19:27:02 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:11.402 19:27:02 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme3 00:05:11.402 19:27:02 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:11.402 19:27:02 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:11.402 19:27:02 -- 
common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:11.402 19:27:02 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:11.402 19:27:02 -- common/autotest_common.sh@1557 -- # continue 00:05:11.402 19:27:02 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:11.402 19:27:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:11.402 19:27:02 -- common/autotest_common.sh@10 -- # set +x 00:05:11.661 19:27:02 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:11.661 19:27:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:11.661 19:27:02 -- common/autotest_common.sh@10 -- # set +x 00:05:11.661 19:27:02 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:12.226 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:12.794 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:12.794 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:12.794 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:12.794 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.054 19:27:03 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:13.054 19:27:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:13.054 19:27:03 -- common/autotest_common.sh@10 -- # set +x 00:05:13.054 19:27:03 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:13.054 19:27:03 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:13.054 19:27:03 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:13.054 19:27:03 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:13.054 19:27:03 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:13.054 19:27:03 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:13.054 19:27:03 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:13.054 19:27:03 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:13.054 19:27:03 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:13.054 19:27:03 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:13.054 19:27:03 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:13.054 19:27:03 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:05:13.054 19:27:03 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:13.054 19:27:03 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:13.054 19:27:03 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:13.054 19:27:03 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:13.054 19:27:03 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:13.054 19:27:03 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:13.054 19:27:03 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:13.054 19:27:03 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:13.054 19:27:03 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:13.054 19:27:03 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:13.054 19:27:03 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:13.054 19:27:03 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:13.054 19:27:03 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:13.054 19:27:03 -- 
common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:13.054 19:27:03 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:13.054 19:27:03 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:13.054 19:27:03 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:13.054 19:27:03 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:13.054 19:27:03 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:13.054 19:27:03 -- common/autotest_common.sh@1593 -- # return 0 00:05:13.054 19:27:03 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:13.054 19:27:03 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:13.054 19:27:03 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:13.054 19:27:03 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:13.054 19:27:03 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:13.054 19:27:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:13.054 19:27:03 -- common/autotest_common.sh@10 -- # set +x 00:05:13.054 19:27:03 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:13.054 19:27:03 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:13.054 19:27:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.054 19:27:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.054 19:27:03 -- common/autotest_common.sh@10 -- # set +x 00:05:13.054 ************************************ 00:05:13.054 START TEST env 00:05:13.054 ************************************ 00:05:13.054 19:27:03 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:13.312 * Looking for test storage... 00:05:13.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:13.312 19:27:03 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:13.312 19:27:03 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.312 19:27:03 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.312 19:27:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.312 ************************************ 00:05:13.312 START TEST env_memory 00:05:13.312 ************************************ 00:05:13.312 19:27:03 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:13.312 00:05:13.312 00:05:13.312 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.312 http://cunit.sourceforge.net/ 00:05:13.312 00:05:13.312 00:05:13.312 Suite: memory 00:05:13.312 Test: alloc and free memory map ...[2024-07-15 19:27:03.964625] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:13.312 passed 00:05:13.312 Test: mem map translation ...[2024-07-15 19:27:04.032159] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:13.312 [2024-07-15 19:27:04.032280] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:13.312 [2024-07-15 19:27:04.032433] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:13.312 [2024-07-15 19:27:04.032513] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:13.592 passed 00:05:13.592 Test: mem map registration ...[2024-07-15 19:27:04.138544] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:13.592 [2024-07-15 19:27:04.138655] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:13.592 passed 00:05:13.592 Test: mem map adjacent registrations ...passed 00:05:13.592 00:05:13.592 Run Summary: Type Total Ran Passed Failed Inactive 00:05:13.592 suites 1 1 n/a 0 0 00:05:13.592 tests 4 4 4 0 0 00:05:13.592 asserts 152 152 152 0 n/a 00:05:13.592 00:05:13.592 Elapsed time = 0.373 seconds 00:05:13.592 00:05:13.592 real 0m0.415s 00:05:13.592 user 0m0.380s 00:05:13.592 sys 0m0.031s 00:05:13.592 19:27:04 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.592 19:27:04 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:13.592 ************************************ 00:05:13.592 END TEST env_memory 00:05:13.592 ************************************ 00:05:13.592 19:27:04 env -- common/autotest_common.sh@1142 -- # return 0 00:05:13.592 19:27:04 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:13.592 19:27:04 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.592 19:27:04 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.592 19:27:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.592 ************************************ 00:05:13.592 START TEST env_vtophys 00:05:13.592 ************************************ 00:05:13.592 19:27:04 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:13.851 EAL: lib.eal log level changed from notice to debug 00:05:13.851 EAL: Detected lcore 0 as core 0 on socket 0 00:05:13.851 EAL: Detected lcore 1 as core 0 on socket 0 00:05:13.851 EAL: Detected lcore 2 as core 0 on socket 0 00:05:13.851 EAL: Detected lcore 3 as core 0 on socket 0 00:05:13.851 EAL: Detected lcore 4 as core 0 on socket 0 00:05:13.851 EAL: Detected lcore 5 as core 0 on socket 0 00:05:13.851 EAL: Detected lcore 6 as core 0 on socket 0 00:05:13.851 EAL: Detected lcore 7 as core 0 on socket 0 00:05:13.851 EAL: Detected lcore 8 as core 0 on socket 0 00:05:13.851 EAL: Detected lcore 9 as core 0 on socket 0 00:05:13.851 EAL: Maximum logical cores by configuration: 128 00:05:13.851 EAL: Detected CPU lcores: 10 00:05:13.851 EAL: Detected NUMA nodes: 1 00:05:13.851 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:13.851 EAL: Detected shared linkage of DPDK 00:05:13.851 EAL: No shared files mode enabled, IPC will be disabled 00:05:13.851 EAL: Selected IOVA mode 'PA' 00:05:13.851 EAL: Probing VFIO support... 00:05:13.851 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:13.851 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:13.851 EAL: Ask a virtual area of 0x2e000 bytes 00:05:13.851 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:13.851 EAL: Setting up physically contiguous memory... 
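Stepping back to the nvme_namespace_revert and opal_revert_cleanup passes earlier: each PCI BDF is resolved to its /dev/nvmeX controller node through sysfs before the OACS and unvmcap fields are read, and the opal pass additionally filters on PCI device ID 0x0a54. A standalone hedged sketch of those lookups (the BDF is one example from this run):

  # resolve a PCI BDF to its NVMe controller node, then check OACS (sketch)
  bdf=0000:00:10.0                                         # example BDF from this run
  sysfs_path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
  ctrlr=/dev/$(basename "$sysfs_path")                     # e.g. /dev/nvme1
  oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)  # e.g. ' 0x12a'
  (( (oacs & 0x8) != 0 )) && echo "$ctrlr supports namespace management"
  # the opal cleanup pass above keys off the PCI device ID instead:
  [ "$(cat /sys/bus/pci/devices/$bdf/device)" = 0x0a54 ] && echo "$bdf matches 0x0a54"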
00:05:13.851 EAL: Setting maximum number of open files to 524288 00:05:13.851 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:13.851 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:13.851 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.851 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:13.851 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.851 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.851 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:13.851 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:13.851 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.851 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:13.851 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.851 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.851 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:13.851 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:13.851 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.851 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:13.851 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.851 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.851 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:13.851 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:13.851 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.851 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:13.851 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.851 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.851 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:13.851 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:13.851 EAL: Hugepages will be freed exactly as allocated. 00:05:13.851 EAL: No shared files mode enabled, IPC is disabled 00:05:13.851 EAL: No shared files mode enabled, IPC is disabled 00:05:13.851 EAL: TSC frequency is ~2100000 KHz 00:05:13.851 EAL: Main lcore 0 is ready (tid=7f5ee121fa40;cpuset=[0]) 00:05:13.851 EAL: Trying to obtain current memory policy. 00:05:13.851 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.851 EAL: Restoring previous memory policy: 0 00:05:13.851 EAL: request: mp_malloc_sync 00:05:13.851 EAL: No shared files mode enabled, IPC is disabled 00:05:13.851 EAL: Heap on socket 0 was expanded by 2MB 00:05:13.851 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:13.851 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:13.851 EAL: Mem event callback 'spdk:(nil)' registered 00:05:13.851 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:13.851 00:05:13.851 00:05:13.851 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.851 http://cunit.sourceforge.net/ 00:05:13.851 00:05:13.851 00:05:13.851 Suite: components_suite 00:05:14.418 Test: vtophys_malloc_test ...passed 00:05:14.418 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
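The memseg-list reservations above sit on top of the 2 MB hugepages that setup.sh status reported earlier (node0: 2048 / 2048). A quick hedged way to inspect that provisioning outside the harness, using standard procfs/sysfs paths:

  # check hugepage provisioning the way setup.sh status summarizes it
  grep -i huge /proc/meminfo
  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages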
00:05:14.418 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.418 EAL: Restoring previous memory policy: 4 00:05:14.418 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.418 EAL: request: mp_malloc_sync 00:05:14.418 EAL: No shared files mode enabled, IPC is disabled 00:05:14.418 EAL: Heap on socket 0 was expanded by 4MB 00:05:14.418 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.418 EAL: request: mp_malloc_sync 00:05:14.418 EAL: No shared files mode enabled, IPC is disabled 00:05:14.418 EAL: Heap on socket 0 was shrunk by 4MB 00:05:14.418 EAL: Trying to obtain current memory policy. 00:05:14.418 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.418 EAL: Restoring previous memory policy: 4 00:05:14.418 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.418 EAL: request: mp_malloc_sync 00:05:14.418 EAL: No shared files mode enabled, IPC is disabled 00:05:14.418 EAL: Heap on socket 0 was expanded by 6MB 00:05:14.418 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.418 EAL: request: mp_malloc_sync 00:05:14.418 EAL: No shared files mode enabled, IPC is disabled 00:05:14.418 EAL: Heap on socket 0 was shrunk by 6MB 00:05:14.418 EAL: Trying to obtain current memory policy. 00:05:14.418 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.418 EAL: Restoring previous memory policy: 4 00:05:14.418 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.418 EAL: request: mp_malloc_sync 00:05:14.418 EAL: No shared files mode enabled, IPC is disabled 00:05:14.418 EAL: Heap on socket 0 was expanded by 10MB 00:05:14.418 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.418 EAL: request: mp_malloc_sync 00:05:14.418 EAL: No shared files mode enabled, IPC is disabled 00:05:14.418 EAL: Heap on socket 0 was shrunk by 10MB 00:05:14.418 EAL: Trying to obtain current memory policy. 00:05:14.418 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.418 EAL: Restoring previous memory policy: 4 00:05:14.418 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.418 EAL: request: mp_malloc_sync 00:05:14.418 EAL: No shared files mode enabled, IPC is disabled 00:05:14.418 EAL: Heap on socket 0 was expanded by 18MB 00:05:14.418 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.418 EAL: request: mp_malloc_sync 00:05:14.418 EAL: No shared files mode enabled, IPC is disabled 00:05:14.418 EAL: Heap on socket 0 was shrunk by 18MB 00:05:14.677 EAL: Trying to obtain current memory policy. 00:05:14.677 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.677 EAL: Restoring previous memory policy: 4 00:05:14.677 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.677 EAL: request: mp_malloc_sync 00:05:14.677 EAL: No shared files mode enabled, IPC is disabled 00:05:14.677 EAL: Heap on socket 0 was expanded by 34MB 00:05:14.677 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.677 EAL: request: mp_malloc_sync 00:05:14.677 EAL: No shared files mode enabled, IPC is disabled 00:05:14.677 EAL: Heap on socket 0 was shrunk by 34MB 00:05:14.677 EAL: Trying to obtain current memory policy. 
00:05:14.677 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.677 EAL: Restoring previous memory policy: 4 00:05:14.677 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.677 EAL: request: mp_malloc_sync 00:05:14.677 EAL: No shared files mode enabled, IPC is disabled 00:05:14.677 EAL: Heap on socket 0 was expanded by 66MB 00:05:14.935 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.935 EAL: request: mp_malloc_sync 00:05:14.935 EAL: No shared files mode enabled, IPC is disabled 00:05:14.935 EAL: Heap on socket 0 was shrunk by 66MB 00:05:14.935 EAL: Trying to obtain current memory policy. 00:05:14.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.935 EAL: Restoring previous memory policy: 4 00:05:14.935 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.935 EAL: request: mp_malloc_sync 00:05:14.935 EAL: No shared files mode enabled, IPC is disabled 00:05:14.935 EAL: Heap on socket 0 was expanded by 130MB 00:05:15.193 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.193 EAL: request: mp_malloc_sync 00:05:15.193 EAL: No shared files mode enabled, IPC is disabled 00:05:15.193 EAL: Heap on socket 0 was shrunk by 130MB 00:05:15.452 EAL: Trying to obtain current memory policy. 00:05:15.452 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.711 EAL: Restoring previous memory policy: 4 00:05:15.711 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.711 EAL: request: mp_malloc_sync 00:05:15.711 EAL: No shared files mode enabled, IPC is disabled 00:05:15.711 EAL: Heap on socket 0 was expanded by 258MB 00:05:16.279 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.279 EAL: request: mp_malloc_sync 00:05:16.279 EAL: No shared files mode enabled, IPC is disabled 00:05:16.279 EAL: Heap on socket 0 was shrunk by 258MB 00:05:16.537 EAL: Trying to obtain current memory policy. 00:05:16.537 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.795 EAL: Restoring previous memory policy: 4 00:05:16.795 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.795 EAL: request: mp_malloc_sync 00:05:16.795 EAL: No shared files mode enabled, IPC is disabled 00:05:16.795 EAL: Heap on socket 0 was expanded by 514MB 00:05:18.170 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.170 EAL: request: mp_malloc_sync 00:05:18.170 EAL: No shared files mode enabled, IPC is disabled 00:05:18.170 EAL: Heap on socket 0 was shrunk by 514MB 00:05:18.738 EAL: Trying to obtain current memory policy. 
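For orientation, the allocation sizes this malloc sweep steps through (4, 6, 10, 18, 34, 66, 130, 258 and 514 MB so far, with 1026 MB still to come below) follow a simple 2^k + 2 MB progression, which can be reproduced as:

  # the per-step sizes of the vtophys malloc sweep: 2^k + 2 MB for k = 1..10
  for k in $(seq 1 10); do echo "$(( (1 << k) + 2 ))MB"; done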
00:05:18.738 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.996 EAL: Restoring previous memory policy: 4 00:05:18.996 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.996 EAL: request: mp_malloc_sync 00:05:18.996 EAL: No shared files mode enabled, IPC is disabled 00:05:18.996 EAL: Heap on socket 0 was expanded by 1026MB 00:05:21.557 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.557 EAL: request: mp_malloc_sync 00:05:21.557 EAL: No shared files mode enabled, IPC is disabled 00:05:21.557 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:23.458 passed 00:05:23.458 00:05:23.458 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.458 suites 1 1 n/a 0 0 00:05:23.458 tests 2 2 2 0 0 00:05:23.458 asserts 5390 5390 5390 0 n/a 00:05:23.458 00:05:23.458 Elapsed time = 9.280 seconds 00:05:23.458 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.458 EAL: request: mp_malloc_sync 00:05:23.458 EAL: No shared files mode enabled, IPC is disabled 00:05:23.458 EAL: Heap on socket 0 was shrunk by 2MB 00:05:23.458 EAL: No shared files mode enabled, IPC is disabled 00:05:23.458 EAL: No shared files mode enabled, IPC is disabled 00:05:23.458 EAL: No shared files mode enabled, IPC is disabled 00:05:23.458 00:05:23.458 real 0m9.606s 00:05:23.458 user 0m8.520s 00:05:23.458 sys 0m0.916s 00:05:23.458 19:27:13 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.458 19:27:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:23.458 ************************************ 00:05:23.458 END TEST env_vtophys 00:05:23.458 ************************************ 00:05:23.458 19:27:14 env -- common/autotest_common.sh@1142 -- # return 0 00:05:23.458 19:27:14 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:23.458 19:27:14 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.458 19:27:14 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.458 19:27:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.458 ************************************ 00:05:23.458 START TEST env_pci 00:05:23.458 ************************************ 00:05:23.458 19:27:14 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:23.458 00:05:23.458 00:05:23.458 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.458 http://cunit.sourceforge.net/ 00:05:23.458 00:05:23.458 00:05:23.458 Suite: pci 00:05:23.458 Test: pci_hook ...[2024-07-15 19:27:14.057900] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 62038 has claimed it 00:05:23.458 passed 00:05:23.458 00:05:23.458 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.458 suites 1 1 n/a 0 0 00:05:23.458 tests 1 1 1 0 0 00:05:23.458 asserts 25 25 25 0 n/a 00:05:23.458 00:05:23.458 Elapsed time = 0.008 seconds 00:05:23.458 EAL: Cannot find device (10000:00:01.0) 00:05:23.458 EAL: Failed to attach device on primary process 00:05:23.458 00:05:23.458 real 0m0.086s 00:05:23.458 user 0m0.035s 00:05:23.458 sys 0m0.050s 00:05:23.458 19:27:14 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.458 19:27:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:23.458 ************************************ 00:05:23.458 END TEST env_pci 00:05:23.458 ************************************ 00:05:23.458 19:27:14 env -- common/autotest_common.sh@1142 -- # 
return 0 00:05:23.458 19:27:14 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:23.458 19:27:14 env -- env/env.sh@15 -- # uname 00:05:23.458 19:27:14 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:23.458 19:27:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:23.458 19:27:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:23.458 19:27:14 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:23.458 19:27:14 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.458 19:27:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.458 ************************************ 00:05:23.458 START TEST env_dpdk_post_init 00:05:23.458 ************************************ 00:05:23.458 19:27:14 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:23.458 EAL: Detected CPU lcores: 10 00:05:23.458 EAL: Detected NUMA nodes: 1 00:05:23.458 EAL: Detected shared linkage of DPDK 00:05:23.715 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:23.715 EAL: Selected IOVA mode 'PA' 00:05:23.716 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:23.716 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:23.716 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:23.716 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:23.716 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:23.716 Starting DPDK initialization... 00:05:23.716 Starting SPDK post initialization... 00:05:23.716 SPDK NVMe probe 00:05:23.716 Attaching to 0000:00:10.0 00:05:23.716 Attaching to 0000:00:11.0 00:05:23.716 Attaching to 0000:00:12.0 00:05:23.716 Attaching to 0000:00:13.0 00:05:23.716 Attached to 0000:00:10.0 00:05:23.716 Attached to 0000:00:11.0 00:05:23.716 Attached to 0000:00:13.0 00:05:23.716 Attached to 0000:00:12.0 00:05:23.716 Cleaning up... 
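The Attaching/Attached lines above only succeed because the four controllers were handed to a userspace driver beforehand; the nvme -> uio_pci_generic transitions earlier in the log are setup.sh performing exactly that rebind. A hedged sketch of checking and toggling the binding by hand (root privileges assumed):

  # see which driver currently owns a controller, then rebind via setup.sh (sketch)
  ls -l /sys/bus/pci/devices/0000:00:10.0/driver              # kernel nvme vs uio_pci_generic
  sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh           # bind NVMe devices for SPDK
  sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
  sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset     # return them to the kernel driver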
00:05:23.716 00:05:23.716 real 0m0.341s 00:05:23.716 user 0m0.124s 00:05:23.716 sys 0m0.118s 00:05:23.716 19:27:14 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.716 19:27:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:23.716 ************************************ 00:05:23.716 END TEST env_dpdk_post_init 00:05:23.716 ************************************ 00:05:23.975 19:27:14 env -- common/autotest_common.sh@1142 -- # return 0 00:05:23.975 19:27:14 env -- env/env.sh@26 -- # uname 00:05:23.975 19:27:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:23.975 19:27:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:23.975 19:27:14 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.975 19:27:14 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.975 19:27:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.975 ************************************ 00:05:23.975 START TEST env_mem_callbacks 00:05:23.975 ************************************ 00:05:23.975 19:27:14 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:23.975 EAL: Detected CPU lcores: 10 00:05:23.975 EAL: Detected NUMA nodes: 1 00:05:23.975 EAL: Detected shared linkage of DPDK 00:05:23.975 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:23.975 EAL: Selected IOVA mode 'PA' 00:05:23.975 00:05:23.975 00:05:23.975 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.975 http://cunit.sourceforge.net/ 00:05:23.975 00:05:23.975 00:05:23.975 Suite: memory 00:05:23.975 Test: test ... 00:05:23.975 register 0x200000200000 2097152 00:05:23.975 malloc 3145728 00:05:23.975 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:23.975 register 0x200000400000 4194304 00:05:23.975 buf 0x2000004fffc0 len 3145728 PASSED 00:05:23.975 malloc 64 00:05:23.975 buf 0x2000004ffec0 len 64 PASSED 00:05:23.975 malloc 4194304 00:05:23.975 register 0x200000800000 6291456 00:05:24.233 buf 0x2000009fffc0 len 4194304 PASSED 00:05:24.233 free 0x2000004fffc0 3145728 00:05:24.233 free 0x2000004ffec0 64 00:05:24.233 unregister 0x200000400000 4194304 PASSED 00:05:24.233 free 0x2000009fffc0 4194304 00:05:24.233 unregister 0x200000800000 6291456 PASSED 00:05:24.233 malloc 8388608 00:05:24.233 register 0x200000400000 10485760 00:05:24.233 buf 0x2000005fffc0 len 8388608 PASSED 00:05:24.233 free 0x2000005fffc0 8388608 00:05:24.233 unregister 0x200000400000 10485760 PASSED 00:05:24.233 passed 00:05:24.233 00:05:24.233 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.233 suites 1 1 n/a 0 0 00:05:24.233 tests 1 1 1 0 0 00:05:24.233 asserts 15 15 15 0 n/a 00:05:24.233 00:05:24.233 Elapsed time = 0.117 seconds 00:05:24.233 00:05:24.233 real 0m0.327s 00:05:24.233 user 0m0.145s 00:05:24.233 sys 0m0.080s 00:05:24.233 19:27:14 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.233 19:27:14 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:24.233 ************************************ 00:05:24.233 END TEST env_mem_callbacks 00:05:24.233 ************************************ 00:05:24.233 19:27:14 env -- common/autotest_common.sh@1142 -- # return 0 00:05:24.233 00:05:24.233 real 0m11.144s 00:05:24.233 user 0m9.330s 00:05:24.233 sys 0m1.437s 00:05:24.233 19:27:14 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.233 
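Each env sub-test above is a standalone binary under test/env/, so a failing one can be re-run directly once hugepages and driver bindings are in place; a hedged sketch using the paths printed in this run (root assumed):

  # re-run individual env sub-tests outside the autotest harness (sketch)
  cd /home/vagrant/spdk_repo/spdk
  sudo scripts/setup.sh                           # hugepages / driver binding first
  sudo test/env/memory/memory_ut
  sudo test/env/mem_callbacks/mem_callbacks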
************************************ 00:05:24.233 END TEST env 00:05:24.233 ************************************ 00:05:24.233 19:27:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.233 19:27:14 -- common/autotest_common.sh@1142 -- # return 0 00:05:24.233 19:27:14 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:24.233 19:27:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.233 19:27:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.233 19:27:14 -- common/autotest_common.sh@10 -- # set +x 00:05:24.233 ************************************ 00:05:24.233 START TEST rpc 00:05:24.233 ************************************ 00:05:24.233 19:27:14 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:24.491 * Looking for test storage... 00:05:24.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:24.491 19:27:15 rpc -- rpc/rpc.sh@65 -- # spdk_pid=62157 00:05:24.491 19:27:15 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.491 19:27:15 rpc -- rpc/rpc.sh@67 -- # waitforlisten 62157 00:05:24.491 19:27:15 rpc -- common/autotest_common.sh@829 -- # '[' -z 62157 ']' 00:05:24.491 19:27:15 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.491 19:27:15 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:24.491 19:27:15 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.491 19:27:15 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.491 19:27:15 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.491 19:27:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.491 [2024-07-15 19:27:15.234667] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:24.491 [2024-07-15 19:27:15.234853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62157 ] 00:05:24.749 [2024-07-15 19:27:15.424332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.006 [2024-07-15 19:27:15.734681] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:25.006 [2024-07-15 19:27:15.734740] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 62157' to capture a snapshot of events at runtime. 00:05:25.007 [2024-07-15 19:27:15.734757] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:25.007 [2024-07-15 19:27:15.734769] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:25.007 [2024-07-15 19:27:15.734792] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid62157 for offline analysis/debug. 
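The rpc suite above starts spdk_tgt with the bdev tracepoint group enabled and then waits for its UNIX-domain socket before issuing any RPCs. A minimal hedged reproduction of that startup handshake (rpc_get_methods is only an illustrative first call, not something this test issues):

  # start the target and wait for its RPC socket, as waitforlisten does above (sketch)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods | head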
00:05:25.007 [2024-07-15 19:27:15.734850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.380 19:27:16 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.380 19:27:16 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:26.380 19:27:16 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:26.380 19:27:16 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:26.380 19:27:16 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:26.380 19:27:16 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:26.380 19:27:16 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.380 19:27:16 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.380 19:27:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.380 ************************************ 00:05:26.380 START TEST rpc_integrity 00:05:26.380 ************************************ 00:05:26.381 19:27:16 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:26.381 19:27:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:26.381 19:27:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.381 19:27:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.381 19:27:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.381 19:27:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:26.381 19:27:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:26.381 19:27:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.381 19:27:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:26.381 19:27:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.381 19:27:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.381 19:27:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.381 19:27:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:26.381 19:27:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:26.381 19:27:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.381 19:27:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.381 19:27:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.381 19:27:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:26.381 { 00:05:26.381 "name": "Malloc0", 00:05:26.381 "aliases": [ 00:05:26.381 "dd35cdb9-3e18-4213-a439-ba3f58dd4db5" 00:05:26.381 ], 00:05:26.381 "product_name": "Malloc disk", 00:05:26.381 "block_size": 512, 00:05:26.381 "num_blocks": 16384, 00:05:26.381 "uuid": "dd35cdb9-3e18-4213-a439-ba3f58dd4db5", 00:05:26.381 "assigned_rate_limits": { 00:05:26.381 "rw_ios_per_sec": 0, 00:05:26.381 "rw_mbytes_per_sec": 0, 00:05:26.381 "r_mbytes_per_sec": 0, 00:05:26.381 "w_mbytes_per_sec": 0 00:05:26.381 }, 00:05:26.381 "claimed": false, 00:05:26.381 "zoned": false, 00:05:26.381 "supported_io_types": { 00:05:26.381 "read": true, 00:05:26.381 "write": true, 00:05:26.381 "unmap": true, 00:05:26.381 "flush": true, 
00:05:26.381 "reset": true, 00:05:26.381 "nvme_admin": false, 00:05:26.381 "nvme_io": false, 00:05:26.381 "nvme_io_md": false, 00:05:26.381 "write_zeroes": true, 00:05:26.381 "zcopy": true, 00:05:26.381 "get_zone_info": false, 00:05:26.381 "zone_management": false, 00:05:26.381 "zone_append": false, 00:05:26.381 "compare": false, 00:05:26.381 "compare_and_write": false, 00:05:26.381 "abort": true, 00:05:26.381 "seek_hole": false, 00:05:26.381 "seek_data": false, 00:05:26.381 "copy": true, 00:05:26.381 "nvme_iov_md": false 00:05:26.381 }, 00:05:26.381 "memory_domains": [ 00:05:26.381 { 00:05:26.381 "dma_device_id": "system", 00:05:26.381 "dma_device_type": 1 00:05:26.381 }, 00:05:26.381 { 00:05:26.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.381 "dma_device_type": 2 00:05:26.381 } 00:05:26.381 ], 00:05:26.381 "driver_specific": {} 00:05:26.381 } 00:05:26.381 ]' 00:05:26.381 19:27:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:26.381 19:27:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:26.381 19:27:16 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:26.381 19:27:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.381 19:27:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.381 [2024-07-15 19:27:16.935041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:26.381 [2024-07-15 19:27:16.935135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:26.381 [2024-07-15 19:27:16.935174] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:26.381 [2024-07-15 19:27:16.935189] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:26.381 [2024-07-15 19:27:16.937978] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:26.381 [2024-07-15 19:27:16.938018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:26.381 Passthru0 00:05:26.381 19:27:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.381 19:27:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:26.381 19:27:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.381 19:27:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.381 19:27:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.381 19:27:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:26.381 { 00:05:26.381 "name": "Malloc0", 00:05:26.381 "aliases": [ 00:05:26.381 "dd35cdb9-3e18-4213-a439-ba3f58dd4db5" 00:05:26.381 ], 00:05:26.381 "product_name": "Malloc disk", 00:05:26.381 "block_size": 512, 00:05:26.381 "num_blocks": 16384, 00:05:26.381 "uuid": "dd35cdb9-3e18-4213-a439-ba3f58dd4db5", 00:05:26.381 "assigned_rate_limits": { 00:05:26.381 "rw_ios_per_sec": 0, 00:05:26.381 "rw_mbytes_per_sec": 0, 00:05:26.381 "r_mbytes_per_sec": 0, 00:05:26.381 "w_mbytes_per_sec": 0 00:05:26.381 }, 00:05:26.381 "claimed": true, 00:05:26.381 "claim_type": "exclusive_write", 00:05:26.381 "zoned": false, 00:05:26.381 "supported_io_types": { 00:05:26.381 "read": true, 00:05:26.381 "write": true, 00:05:26.381 "unmap": true, 00:05:26.381 "flush": true, 00:05:26.381 "reset": true, 00:05:26.381 "nvme_admin": false, 00:05:26.381 "nvme_io": false, 00:05:26.381 "nvme_io_md": false, 00:05:26.381 "write_zeroes": true, 00:05:26.381 "zcopy": true, 
00:05:26.381 "get_zone_info": false, 00:05:26.381 "zone_management": false, 00:05:26.381 "zone_append": false, 00:05:26.381 "compare": false, 00:05:26.381 "compare_and_write": false, 00:05:26.381 "abort": true, 00:05:26.381 "seek_hole": false, 00:05:26.381 "seek_data": false, 00:05:26.381 "copy": true, 00:05:26.381 "nvme_iov_md": false 00:05:26.381 }, 00:05:26.381 "memory_domains": [ 00:05:26.381 { 00:05:26.381 "dma_device_id": "system", 00:05:26.381 "dma_device_type": 1 00:05:26.381 }, 00:05:26.381 { 00:05:26.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.381 "dma_device_type": 2 00:05:26.381 } 00:05:26.381 ], 00:05:26.381 "driver_specific": {} 00:05:26.381 }, 00:05:26.381 { 00:05:26.381 "name": "Passthru0", 00:05:26.381 "aliases": [ 00:05:26.381 "ab7c8708-0550-5082-a9be-8a700e89271f" 00:05:26.381 ], 00:05:26.381 "product_name": "passthru", 00:05:26.381 "block_size": 512, 00:05:26.381 "num_blocks": 16384, 00:05:26.381 "uuid": "ab7c8708-0550-5082-a9be-8a700e89271f", 00:05:26.381 "assigned_rate_limits": { 00:05:26.381 "rw_ios_per_sec": 0, 00:05:26.381 "rw_mbytes_per_sec": 0, 00:05:26.381 "r_mbytes_per_sec": 0, 00:05:26.381 "w_mbytes_per_sec": 0 00:05:26.381 }, 00:05:26.381 "claimed": false, 00:05:26.381 "zoned": false, 00:05:26.381 "supported_io_types": { 00:05:26.381 "read": true, 00:05:26.381 "write": true, 00:05:26.381 "unmap": true, 00:05:26.381 "flush": true, 00:05:26.381 "reset": true, 00:05:26.381 "nvme_admin": false, 00:05:26.381 "nvme_io": false, 00:05:26.381 "nvme_io_md": false, 00:05:26.381 "write_zeroes": true, 00:05:26.381 "zcopy": true, 00:05:26.381 "get_zone_info": false, 00:05:26.381 "zone_management": false, 00:05:26.381 "zone_append": false, 00:05:26.381 "compare": false, 00:05:26.381 "compare_and_write": false, 00:05:26.381 "abort": true, 00:05:26.381 "seek_hole": false, 00:05:26.381 "seek_data": false, 00:05:26.381 "copy": true, 00:05:26.381 "nvme_iov_md": false 00:05:26.381 }, 00:05:26.381 "memory_domains": [ 00:05:26.381 { 00:05:26.381 "dma_device_id": "system", 00:05:26.381 "dma_device_type": 1 00:05:26.381 }, 00:05:26.381 { 00:05:26.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.381 "dma_device_type": 2 00:05:26.381 } 00:05:26.381 ], 00:05:26.381 "driver_specific": { 00:05:26.381 "passthru": { 00:05:26.381 "name": "Passthru0", 00:05:26.381 "base_bdev_name": "Malloc0" 00:05:26.381 } 00:05:26.381 } 00:05:26.381 } 00:05:26.381 ]' 00:05:26.381 19:27:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:26.381 19:27:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:26.381 19:27:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:26.381 19:27:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.381 19:27:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.381 19:27:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.381 19:27:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:26.381 19:27:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.381 19:27:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.381 19:27:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.381 19:27:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:26.381 19:27:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.381 19:27:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:26.381 19:27:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.381 19:27:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:26.381 19:27:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:26.381 19:27:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:26.381 00:05:26.381 real 0m0.370s 00:05:26.381 user 0m0.206s 00:05:26.381 sys 0m0.053s 00:05:26.381 19:27:17 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.381 19:27:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.381 ************************************ 00:05:26.381 END TEST rpc_integrity 00:05:26.381 ************************************ 00:05:26.640 19:27:17 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:26.640 19:27:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:26.640 19:27:17 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.640 19:27:17 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.640 19:27:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.640 ************************************ 00:05:26.640 START TEST rpc_plugins 00:05:26.640 ************************************ 00:05:26.640 19:27:17 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:26.640 19:27:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:26.640 19:27:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.640 19:27:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.640 19:27:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.640 19:27:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:26.640 19:27:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:26.640 19:27:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.640 19:27:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.640 19:27:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.640 19:27:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:26.640 { 00:05:26.640 "name": "Malloc1", 00:05:26.640 "aliases": [ 00:05:26.640 "3f53b1f7-17e1-4f8b-8eff-bcd0e8f21738" 00:05:26.640 ], 00:05:26.640 "product_name": "Malloc disk", 00:05:26.640 "block_size": 4096, 00:05:26.640 "num_blocks": 256, 00:05:26.640 "uuid": "3f53b1f7-17e1-4f8b-8eff-bcd0e8f21738", 00:05:26.640 "assigned_rate_limits": { 00:05:26.640 "rw_ios_per_sec": 0, 00:05:26.640 "rw_mbytes_per_sec": 0, 00:05:26.640 "r_mbytes_per_sec": 0, 00:05:26.640 "w_mbytes_per_sec": 0 00:05:26.640 }, 00:05:26.640 "claimed": false, 00:05:26.640 "zoned": false, 00:05:26.640 "supported_io_types": { 00:05:26.640 "read": true, 00:05:26.640 "write": true, 00:05:26.640 "unmap": true, 00:05:26.640 "flush": true, 00:05:26.640 "reset": true, 00:05:26.640 "nvme_admin": false, 00:05:26.640 "nvme_io": false, 00:05:26.640 "nvme_io_md": false, 00:05:26.640 "write_zeroes": true, 00:05:26.640 "zcopy": true, 00:05:26.640 "get_zone_info": false, 00:05:26.640 "zone_management": false, 00:05:26.640 "zone_append": false, 00:05:26.640 "compare": false, 00:05:26.640 "compare_and_write": false, 00:05:26.640 "abort": true, 00:05:26.640 "seek_hole": false, 00:05:26.640 "seek_data": false, 00:05:26.640 "copy": true, 00:05:26.640 "nvme_iov_md": false 00:05:26.640 }, 00:05:26.640 "memory_domains": [ 00:05:26.640 { 00:05:26.640 "dma_device_id": "system", 00:05:26.640 
"dma_device_type": 1 00:05:26.640 }, 00:05:26.640 { 00:05:26.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.640 "dma_device_type": 2 00:05:26.640 } 00:05:26.640 ], 00:05:26.640 "driver_specific": {} 00:05:26.640 } 00:05:26.640 ]' 00:05:26.640 19:27:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:26.640 19:27:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:26.640 19:27:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:26.640 19:27:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.640 19:27:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.640 19:27:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.640 19:27:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:26.640 19:27:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.640 19:27:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.640 19:27:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.640 19:27:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:26.640 19:27:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:26.640 19:27:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:26.640 00:05:26.640 real 0m0.137s 00:05:26.640 user 0m0.077s 00:05:26.640 sys 0m0.022s 00:05:26.640 19:27:17 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.640 ************************************ 00:05:26.640 END TEST rpc_plugins 00:05:26.640 ************************************ 00:05:26.640 19:27:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.641 19:27:17 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:26.641 19:27:17 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:26.641 19:27:17 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.641 19:27:17 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.641 19:27:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.641 ************************************ 00:05:26.641 START TEST rpc_trace_cmd_test 00:05:26.641 ************************************ 00:05:26.641 19:27:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:26.641 19:27:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:26.641 19:27:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:26.641 19:27:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.641 19:27:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:26.641 19:27:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.641 19:27:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:26.641 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid62157", 00:05:26.641 "tpoint_group_mask": "0x8", 00:05:26.641 "iscsi_conn": { 00:05:26.641 "mask": "0x2", 00:05:26.641 "tpoint_mask": "0x0" 00:05:26.641 }, 00:05:26.641 "scsi": { 00:05:26.641 "mask": "0x4", 00:05:26.641 "tpoint_mask": "0x0" 00:05:26.641 }, 00:05:26.641 "bdev": { 00:05:26.641 "mask": "0x8", 00:05:26.641 "tpoint_mask": "0xffffffffffffffff" 00:05:26.641 }, 00:05:26.641 "nvmf_rdma": { 00:05:26.641 "mask": "0x10", 00:05:26.641 "tpoint_mask": "0x0" 00:05:26.641 }, 00:05:26.641 "nvmf_tcp": { 00:05:26.641 "mask": "0x20", 00:05:26.641 "tpoint_mask": "0x0" 00:05:26.641 }, 00:05:26.641 "ftl": 
{ 00:05:26.641 "mask": "0x40", 00:05:26.641 "tpoint_mask": "0x0" 00:05:26.641 }, 00:05:26.641 "blobfs": { 00:05:26.641 "mask": "0x80", 00:05:26.641 "tpoint_mask": "0x0" 00:05:26.641 }, 00:05:26.641 "dsa": { 00:05:26.641 "mask": "0x200", 00:05:26.641 "tpoint_mask": "0x0" 00:05:26.641 }, 00:05:26.641 "thread": { 00:05:26.641 "mask": "0x400", 00:05:26.641 "tpoint_mask": "0x0" 00:05:26.641 }, 00:05:26.641 "nvme_pcie": { 00:05:26.641 "mask": "0x800", 00:05:26.641 "tpoint_mask": "0x0" 00:05:26.641 }, 00:05:26.641 "iaa": { 00:05:26.641 "mask": "0x1000", 00:05:26.641 "tpoint_mask": "0x0" 00:05:26.641 }, 00:05:26.641 "nvme_tcp": { 00:05:26.641 "mask": "0x2000", 00:05:26.641 "tpoint_mask": "0x0" 00:05:26.641 }, 00:05:26.641 "bdev_nvme": { 00:05:26.641 "mask": "0x4000", 00:05:26.641 "tpoint_mask": "0x0" 00:05:26.641 }, 00:05:26.641 "sock": { 00:05:26.641 "mask": "0x8000", 00:05:26.641 "tpoint_mask": "0x0" 00:05:26.641 } 00:05:26.641 }' 00:05:26.641 19:27:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:26.899 19:27:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:26.899 19:27:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:26.899 19:27:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:26.899 19:27:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:26.899 19:27:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:26.899 19:27:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:26.899 19:27:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:26.899 19:27:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:26.899 19:27:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:26.899 00:05:26.899 real 0m0.238s 00:05:26.899 user 0m0.200s 00:05:26.899 sys 0m0.029s 00:05:26.899 19:27:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.899 19:27:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:26.899 ************************************ 00:05:26.899 END TEST rpc_trace_cmd_test 00:05:26.899 ************************************ 00:05:26.899 19:27:17 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:26.899 19:27:17 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:26.899 19:27:17 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:26.899 19:27:17 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:26.899 19:27:17 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.899 19:27:17 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.899 19:27:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.899 ************************************ 00:05:26.899 START TEST rpc_daemon_integrity 00:05:26.899 ************************************ 00:05:26.899 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:26.899 19:27:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:26.899 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.899 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.899 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.899 19:27:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:27.158 19:27:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq 
length 00:05:27.158 19:27:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:27.158 19:27:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:27.158 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.158 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.158 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.158 19:27:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:27.158 19:27:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:27.158 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.158 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.158 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.158 19:27:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:27.158 { 00:05:27.158 "name": "Malloc2", 00:05:27.158 "aliases": [ 00:05:27.158 "ced00449-5234-4843-acb4-545f3d3d2d2e" 00:05:27.158 ], 00:05:27.158 "product_name": "Malloc disk", 00:05:27.158 "block_size": 512, 00:05:27.158 "num_blocks": 16384, 00:05:27.158 "uuid": "ced00449-5234-4843-acb4-545f3d3d2d2e", 00:05:27.158 "assigned_rate_limits": { 00:05:27.158 "rw_ios_per_sec": 0, 00:05:27.158 "rw_mbytes_per_sec": 0, 00:05:27.158 "r_mbytes_per_sec": 0, 00:05:27.158 "w_mbytes_per_sec": 0 00:05:27.158 }, 00:05:27.158 "claimed": false, 00:05:27.158 "zoned": false, 00:05:27.158 "supported_io_types": { 00:05:27.158 "read": true, 00:05:27.158 "write": true, 00:05:27.158 "unmap": true, 00:05:27.158 "flush": true, 00:05:27.158 "reset": true, 00:05:27.158 "nvme_admin": false, 00:05:27.158 "nvme_io": false, 00:05:27.158 "nvme_io_md": false, 00:05:27.158 "write_zeroes": true, 00:05:27.158 "zcopy": true, 00:05:27.158 "get_zone_info": false, 00:05:27.158 "zone_management": false, 00:05:27.158 "zone_append": false, 00:05:27.158 "compare": false, 00:05:27.158 "compare_and_write": false, 00:05:27.158 "abort": true, 00:05:27.158 "seek_hole": false, 00:05:27.158 "seek_data": false, 00:05:27.158 "copy": true, 00:05:27.158 "nvme_iov_md": false 00:05:27.158 }, 00:05:27.158 "memory_domains": [ 00:05:27.158 { 00:05:27.158 "dma_device_id": "system", 00:05:27.158 "dma_device_type": 1 00:05:27.158 }, 00:05:27.158 { 00:05:27.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.158 "dma_device_type": 2 00:05:27.158 } 00:05:27.158 ], 00:05:27.158 "driver_specific": {} 00:05:27.158 } 00:05:27.158 ]' 00:05:27.158 19:27:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:27.158 19:27:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:27.158 19:27:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:27.158 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.158 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.158 [2024-07-15 19:27:17.816676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:27.158 [2024-07-15 19:27:17.816753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:27.158 [2024-07-15 19:27:17.816781] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:27.158 [2024-07-15 19:27:17.816806] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:27.158 [2024-07-15 19:27:17.819429] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:27.158 [2024-07-15 19:27:17.819467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:27.158 Passthru0 00:05:27.158 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.158 19:27:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:27.158 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.158 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.158 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.158 19:27:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:27.158 { 00:05:27.158 "name": "Malloc2", 00:05:27.158 "aliases": [ 00:05:27.158 "ced00449-5234-4843-acb4-545f3d3d2d2e" 00:05:27.158 ], 00:05:27.158 "product_name": "Malloc disk", 00:05:27.158 "block_size": 512, 00:05:27.158 "num_blocks": 16384, 00:05:27.158 "uuid": "ced00449-5234-4843-acb4-545f3d3d2d2e", 00:05:27.158 "assigned_rate_limits": { 00:05:27.158 "rw_ios_per_sec": 0, 00:05:27.158 "rw_mbytes_per_sec": 0, 00:05:27.158 "r_mbytes_per_sec": 0, 00:05:27.158 "w_mbytes_per_sec": 0 00:05:27.158 }, 00:05:27.158 "claimed": true, 00:05:27.158 "claim_type": "exclusive_write", 00:05:27.158 "zoned": false, 00:05:27.158 "supported_io_types": { 00:05:27.158 "read": true, 00:05:27.158 "write": true, 00:05:27.158 "unmap": true, 00:05:27.158 "flush": true, 00:05:27.158 "reset": true, 00:05:27.158 "nvme_admin": false, 00:05:27.158 "nvme_io": false, 00:05:27.158 "nvme_io_md": false, 00:05:27.158 "write_zeroes": true, 00:05:27.158 "zcopy": true, 00:05:27.158 "get_zone_info": false, 00:05:27.158 "zone_management": false, 00:05:27.158 "zone_append": false, 00:05:27.158 "compare": false, 00:05:27.158 "compare_and_write": false, 00:05:27.158 "abort": true, 00:05:27.158 "seek_hole": false, 00:05:27.158 "seek_data": false, 00:05:27.158 "copy": true, 00:05:27.158 "nvme_iov_md": false 00:05:27.158 }, 00:05:27.158 "memory_domains": [ 00:05:27.158 { 00:05:27.158 "dma_device_id": "system", 00:05:27.158 "dma_device_type": 1 00:05:27.158 }, 00:05:27.158 { 00:05:27.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.158 "dma_device_type": 2 00:05:27.158 } 00:05:27.158 ], 00:05:27.158 "driver_specific": {} 00:05:27.158 }, 00:05:27.158 { 00:05:27.158 "name": "Passthru0", 00:05:27.158 "aliases": [ 00:05:27.158 "093e94a2-a94d-548f-8c44-d3f1ae88d9d5" 00:05:27.158 ], 00:05:27.158 "product_name": "passthru", 00:05:27.158 "block_size": 512, 00:05:27.158 "num_blocks": 16384, 00:05:27.158 "uuid": "093e94a2-a94d-548f-8c44-d3f1ae88d9d5", 00:05:27.158 "assigned_rate_limits": { 00:05:27.158 "rw_ios_per_sec": 0, 00:05:27.158 "rw_mbytes_per_sec": 0, 00:05:27.158 "r_mbytes_per_sec": 0, 00:05:27.158 "w_mbytes_per_sec": 0 00:05:27.158 }, 00:05:27.158 "claimed": false, 00:05:27.158 "zoned": false, 00:05:27.158 "supported_io_types": { 00:05:27.158 "read": true, 00:05:27.158 "write": true, 00:05:27.158 "unmap": true, 00:05:27.158 "flush": true, 00:05:27.158 "reset": true, 00:05:27.158 "nvme_admin": false, 00:05:27.158 "nvme_io": false, 00:05:27.158 "nvme_io_md": false, 00:05:27.158 "write_zeroes": true, 00:05:27.158 "zcopy": true, 00:05:27.158 "get_zone_info": false, 00:05:27.158 "zone_management": false, 00:05:27.158 "zone_append": false, 00:05:27.158 "compare": 
false, 00:05:27.158 "compare_and_write": false, 00:05:27.158 "abort": true, 00:05:27.159 "seek_hole": false, 00:05:27.159 "seek_data": false, 00:05:27.159 "copy": true, 00:05:27.159 "nvme_iov_md": false 00:05:27.159 }, 00:05:27.159 "memory_domains": [ 00:05:27.159 { 00:05:27.159 "dma_device_id": "system", 00:05:27.159 "dma_device_type": 1 00:05:27.159 }, 00:05:27.159 { 00:05:27.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.159 "dma_device_type": 2 00:05:27.159 } 00:05:27.159 ], 00:05:27.159 "driver_specific": { 00:05:27.159 "passthru": { 00:05:27.159 "name": "Passthru0", 00:05:27.159 "base_bdev_name": "Malloc2" 00:05:27.159 } 00:05:27.159 } 00:05:27.159 } 00:05:27.159 ]' 00:05:27.159 19:27:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:27.159 19:27:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:27.159 19:27:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:27.159 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.159 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.159 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.159 19:27:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:27.159 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.159 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.159 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.159 19:27:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:27.159 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.159 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.416 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.416 19:27:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:27.416 19:27:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:27.416 19:27:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:27.416 00:05:27.416 real 0m0.315s 00:05:27.416 user 0m0.159s 00:05:27.416 sys 0m0.057s 00:05:27.416 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.416 19:27:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.416 ************************************ 00:05:27.416 END TEST rpc_daemon_integrity 00:05:27.416 ************************************ 00:05:27.416 19:27:18 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:27.416 19:27:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:27.416 19:27:18 rpc -- rpc/rpc.sh@84 -- # killprocess 62157 00:05:27.416 19:27:18 rpc -- common/autotest_common.sh@948 -- # '[' -z 62157 ']' 00:05:27.416 19:27:18 rpc -- common/autotest_common.sh@952 -- # kill -0 62157 00:05:27.416 19:27:18 rpc -- common/autotest_common.sh@953 -- # uname 00:05:27.416 19:27:18 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.416 19:27:18 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62157 00:05:27.416 19:27:18 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.416 killing process with pid 62157 00:05:27.416 19:27:18 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.416 
19:27:18 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62157' 00:05:27.416 19:27:18 rpc -- common/autotest_common.sh@967 -- # kill 62157 00:05:27.416 19:27:18 rpc -- common/autotest_common.sh@972 -- # wait 62157 00:05:29.992 00:05:29.992 real 0m5.787s 00:05:29.992 user 0m6.262s 00:05:29.992 sys 0m0.912s 00:05:29.992 19:27:20 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.992 ************************************ 00:05:29.992 END TEST rpc 00:05:29.992 ************************************ 00:05:29.992 19:27:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.250 19:27:20 -- common/autotest_common.sh@1142 -- # return 0 00:05:30.250 19:27:20 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:30.250 19:27:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.250 19:27:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.250 19:27:20 -- common/autotest_common.sh@10 -- # set +x 00:05:30.250 ************************************ 00:05:30.250 START TEST skip_rpc 00:05:30.250 ************************************ 00:05:30.250 19:27:20 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:30.250 * Looking for test storage... 00:05:30.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.250 19:27:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:30.250 19:27:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:30.250 19:27:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:30.250 19:27:20 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.250 19:27:20 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.250 19:27:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.250 ************************************ 00:05:30.250 START TEST skip_rpc 00:05:30.250 ************************************ 00:05:30.250 19:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:30.250 19:27:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=62388 00:05:30.250 19:27:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:30.250 19:27:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.250 19:27:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:30.508 [2024-07-15 19:27:21.073163] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
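The skip_rpc case just started above launches the target with --no-rpc-server, so the spdk_get_version RPC attempted a few lines below is expected to fail, and the NOT wrapper turns that failure into a pass. A rough stand-alone equivalent of the same check (binary and script paths as used throughout this run; the 5-second settle time matches skip_rpc.sh@19):
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5
  scripts/rpc.py spdk_get_version \
    && echo 'unexpected: RPC server is listening' \
    || echo 'RPC refused, as the test expects'
  kill %1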
00:05:30.508 [2024-07-15 19:27:21.073353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62388 ] 00:05:30.508 [2024-07-15 19:27:21.259062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.765 [2024-07-15 19:27:21.503673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 62388 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 62388 ']' 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 62388 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62388 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:36.023 killing process with pid 62388 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62388' 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 62388 00:05:36.023 19:27:25 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 62388 00:05:37.923 00:05:37.923 real 0m7.733s 00:05:37.923 user 0m7.213s 00:05:37.923 sys 0m0.405s 00:05:37.923 19:27:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.923 ************************************ 00:05:37.923 END TEST skip_rpc 00:05:37.923 ************************************ 00:05:37.923 19:27:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:38.181 19:27:28 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:38.181 19:27:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:38.181 19:27:28 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.181 19:27:28 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.181 19:27:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.181 ************************************ 00:05:38.181 START TEST skip_rpc_with_json 00:05:38.181 ************************************ 00:05:38.181 19:27:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:38.181 19:27:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:38.181 19:27:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=62493 00:05:38.181 19:27:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.181 19:27:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 62493 00:05:38.181 19:27:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 62493 ']' 00:05:38.181 19:27:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.181 19:27:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.181 19:27:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.181 19:27:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.181 19:27:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.181 19:27:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.181 [2024-07-15 19:27:28.863109] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
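The skip_rpc_with_json flow that follows reduces to: issue nvmf_get_transports (expected to fail while no transport exists), create a TCP transport over RPC, persist the live configuration with save_config, then restart the target from that JSON and confirm the transport is re-created at startup. A condensed sketch with paths shortened to the repo root (this run uses the absolute CONFIG_PATH and LOG_PATH defined earlier):
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > test/rpc/config.json
  # the target is then restarted non-interactively from the saved file:
  #   build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json
  # and its captured startup log is checked for the transport banner:
  grep -q 'TCP Transport Init' test/rpc/log.txt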
00:05:38.181 [2024-07-15 19:27:28.863303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62493 ] 00:05:38.438 [2024-07-15 19:27:29.040654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.696 [2024-07-15 19:27:29.299339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.629 19:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.629 19:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:39.629 19:27:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:39.629 19:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.629 19:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.629 [2024-07-15 19:27:30.276864] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:39.629 request: 00:05:39.629 { 00:05:39.629 "trtype": "tcp", 00:05:39.629 "method": "nvmf_get_transports", 00:05:39.629 "req_id": 1 00:05:39.629 } 00:05:39.629 Got JSON-RPC error response 00:05:39.629 response: 00:05:39.629 { 00:05:39.629 "code": -19, 00:05:39.629 "message": "No such device" 00:05:39.629 } 00:05:39.629 19:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:39.629 19:27:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:39.629 19:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.629 19:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.629 [2024-07-15 19:27:30.288971] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:39.629 19:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.629 19:27:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:39.629 19:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.629 19:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.886 19:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.886 19:27:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:39.886 { 00:05:39.886 "subsystems": [ 00:05:39.886 { 00:05:39.886 "subsystem": "keyring", 00:05:39.886 "config": [] 00:05:39.886 }, 00:05:39.886 { 00:05:39.886 "subsystem": "iobuf", 00:05:39.886 "config": [ 00:05:39.886 { 00:05:39.886 "method": "iobuf_set_options", 00:05:39.886 "params": { 00:05:39.886 "small_pool_count": 8192, 00:05:39.886 "large_pool_count": 1024, 00:05:39.886 "small_bufsize": 8192, 00:05:39.886 "large_bufsize": 135168 00:05:39.886 } 00:05:39.886 } 00:05:39.886 ] 00:05:39.886 }, 00:05:39.886 { 00:05:39.886 "subsystem": "sock", 00:05:39.886 "config": [ 00:05:39.886 { 00:05:39.886 "method": "sock_set_default_impl", 00:05:39.886 "params": { 00:05:39.886 "impl_name": "posix" 00:05:39.886 } 00:05:39.886 }, 00:05:39.886 { 00:05:39.886 "method": "sock_impl_set_options", 00:05:39.886 "params": { 00:05:39.886 "impl_name": "ssl", 00:05:39.886 "recv_buf_size": 4096, 00:05:39.886 "send_buf_size": 4096, 
00:05:39.886 "enable_recv_pipe": true, 00:05:39.886 "enable_quickack": false, 00:05:39.886 "enable_placement_id": 0, 00:05:39.886 "enable_zerocopy_send_server": true, 00:05:39.886 "enable_zerocopy_send_client": false, 00:05:39.886 "zerocopy_threshold": 0, 00:05:39.886 "tls_version": 0, 00:05:39.886 "enable_ktls": false 00:05:39.886 } 00:05:39.886 }, 00:05:39.886 { 00:05:39.886 "method": "sock_impl_set_options", 00:05:39.886 "params": { 00:05:39.886 "impl_name": "posix", 00:05:39.886 "recv_buf_size": 2097152, 00:05:39.886 "send_buf_size": 2097152, 00:05:39.886 "enable_recv_pipe": true, 00:05:39.886 "enable_quickack": false, 00:05:39.886 "enable_placement_id": 0, 00:05:39.886 "enable_zerocopy_send_server": true, 00:05:39.886 "enable_zerocopy_send_client": false, 00:05:39.886 "zerocopy_threshold": 0, 00:05:39.886 "tls_version": 0, 00:05:39.886 "enable_ktls": false 00:05:39.886 } 00:05:39.886 } 00:05:39.886 ] 00:05:39.886 }, 00:05:39.886 { 00:05:39.886 "subsystem": "vmd", 00:05:39.886 "config": [] 00:05:39.886 }, 00:05:39.886 { 00:05:39.886 "subsystem": "accel", 00:05:39.886 "config": [ 00:05:39.886 { 00:05:39.886 "method": "accel_set_options", 00:05:39.886 "params": { 00:05:39.886 "small_cache_size": 128, 00:05:39.886 "large_cache_size": 16, 00:05:39.886 "task_count": 2048, 00:05:39.886 "sequence_count": 2048, 00:05:39.886 "buf_count": 2048 00:05:39.886 } 00:05:39.886 } 00:05:39.886 ] 00:05:39.886 }, 00:05:39.886 { 00:05:39.886 "subsystem": "bdev", 00:05:39.886 "config": [ 00:05:39.886 { 00:05:39.886 "method": "bdev_set_options", 00:05:39.886 "params": { 00:05:39.886 "bdev_io_pool_size": 65535, 00:05:39.886 "bdev_io_cache_size": 256, 00:05:39.886 "bdev_auto_examine": true, 00:05:39.886 "iobuf_small_cache_size": 128, 00:05:39.886 "iobuf_large_cache_size": 16 00:05:39.886 } 00:05:39.886 }, 00:05:39.886 { 00:05:39.886 "method": "bdev_raid_set_options", 00:05:39.886 "params": { 00:05:39.886 "process_window_size_kb": 1024 00:05:39.886 } 00:05:39.886 }, 00:05:39.886 { 00:05:39.886 "method": "bdev_iscsi_set_options", 00:05:39.886 "params": { 00:05:39.886 "timeout_sec": 30 00:05:39.886 } 00:05:39.886 }, 00:05:39.886 { 00:05:39.886 "method": "bdev_nvme_set_options", 00:05:39.886 "params": { 00:05:39.886 "action_on_timeout": "none", 00:05:39.886 "timeout_us": 0, 00:05:39.886 "timeout_admin_us": 0, 00:05:39.886 "keep_alive_timeout_ms": 10000, 00:05:39.886 "arbitration_burst": 0, 00:05:39.886 "low_priority_weight": 0, 00:05:39.886 "medium_priority_weight": 0, 00:05:39.886 "high_priority_weight": 0, 00:05:39.886 "nvme_adminq_poll_period_us": 10000, 00:05:39.886 "nvme_ioq_poll_period_us": 0, 00:05:39.886 "io_queue_requests": 0, 00:05:39.886 "delay_cmd_submit": true, 00:05:39.886 "transport_retry_count": 4, 00:05:39.886 "bdev_retry_count": 3, 00:05:39.886 "transport_ack_timeout": 0, 00:05:39.886 "ctrlr_loss_timeout_sec": 0, 00:05:39.886 "reconnect_delay_sec": 0, 00:05:39.886 "fast_io_fail_timeout_sec": 0, 00:05:39.886 "disable_auto_failback": false, 00:05:39.886 "generate_uuids": false, 00:05:39.886 "transport_tos": 0, 00:05:39.886 "nvme_error_stat": false, 00:05:39.886 "rdma_srq_size": 0, 00:05:39.886 "io_path_stat": false, 00:05:39.886 "allow_accel_sequence": false, 00:05:39.886 "rdma_max_cq_size": 0, 00:05:39.886 "rdma_cm_event_timeout_ms": 0, 00:05:39.886 "dhchap_digests": [ 00:05:39.886 "sha256", 00:05:39.886 "sha384", 00:05:39.886 "sha512" 00:05:39.886 ], 00:05:39.886 "dhchap_dhgroups": [ 00:05:39.886 "null", 00:05:39.886 "ffdhe2048", 00:05:39.886 "ffdhe3072", 00:05:39.886 "ffdhe4096", 00:05:39.886 
"ffdhe6144", 00:05:39.886 "ffdhe8192" 00:05:39.886 ] 00:05:39.886 } 00:05:39.886 }, 00:05:39.886 { 00:05:39.886 "method": "bdev_nvme_set_hotplug", 00:05:39.886 "params": { 00:05:39.886 "period_us": 100000, 00:05:39.886 "enable": false 00:05:39.886 } 00:05:39.886 }, 00:05:39.886 { 00:05:39.886 "method": "bdev_wait_for_examine" 00:05:39.886 } 00:05:39.886 ] 00:05:39.886 }, 00:05:39.886 { 00:05:39.886 "subsystem": "scsi", 00:05:39.886 "config": null 00:05:39.886 }, 00:05:39.886 { 00:05:39.886 "subsystem": "scheduler", 00:05:39.886 "config": [ 00:05:39.886 { 00:05:39.886 "method": "framework_set_scheduler", 00:05:39.886 "params": { 00:05:39.886 "name": "static" 00:05:39.886 } 00:05:39.886 } 00:05:39.886 ] 00:05:39.886 }, 00:05:39.886 { 00:05:39.886 "subsystem": "vhost_scsi", 00:05:39.886 "config": [] 00:05:39.886 }, 00:05:39.886 { 00:05:39.886 "subsystem": "vhost_blk", 00:05:39.886 "config": [] 00:05:39.886 }, 00:05:39.886 { 00:05:39.887 "subsystem": "ublk", 00:05:39.887 "config": [] 00:05:39.887 }, 00:05:39.887 { 00:05:39.887 "subsystem": "nbd", 00:05:39.887 "config": [] 00:05:39.887 }, 00:05:39.887 { 00:05:39.887 "subsystem": "nvmf", 00:05:39.887 "config": [ 00:05:39.887 { 00:05:39.887 "method": "nvmf_set_config", 00:05:39.887 "params": { 00:05:39.887 "discovery_filter": "match_any", 00:05:39.887 "admin_cmd_passthru": { 00:05:39.887 "identify_ctrlr": false 00:05:39.887 } 00:05:39.887 } 00:05:39.887 }, 00:05:39.887 { 00:05:39.887 "method": "nvmf_set_max_subsystems", 00:05:39.887 "params": { 00:05:39.887 "max_subsystems": 1024 00:05:39.887 } 00:05:39.887 }, 00:05:39.887 { 00:05:39.887 "method": "nvmf_set_crdt", 00:05:39.887 "params": { 00:05:39.887 "crdt1": 0, 00:05:39.887 "crdt2": 0, 00:05:39.887 "crdt3": 0 00:05:39.887 } 00:05:39.887 }, 00:05:39.887 { 00:05:39.887 "method": "nvmf_create_transport", 00:05:39.887 "params": { 00:05:39.887 "trtype": "TCP", 00:05:39.887 "max_queue_depth": 128, 00:05:39.887 "max_io_qpairs_per_ctrlr": 127, 00:05:39.887 "in_capsule_data_size": 4096, 00:05:39.887 "max_io_size": 131072, 00:05:39.887 "io_unit_size": 131072, 00:05:39.887 "max_aq_depth": 128, 00:05:39.887 "num_shared_buffers": 511, 00:05:39.887 "buf_cache_size": 4294967295, 00:05:39.887 "dif_insert_or_strip": false, 00:05:39.887 "zcopy": false, 00:05:39.887 "c2h_success": true, 00:05:39.887 "sock_priority": 0, 00:05:39.887 "abort_timeout_sec": 1, 00:05:39.887 "ack_timeout": 0, 00:05:39.887 "data_wr_pool_size": 0 00:05:39.887 } 00:05:39.887 } 00:05:39.887 ] 00:05:39.887 }, 00:05:39.887 { 00:05:39.887 "subsystem": "iscsi", 00:05:39.887 "config": [ 00:05:39.887 { 00:05:39.887 "method": "iscsi_set_options", 00:05:39.887 "params": { 00:05:39.887 "node_base": "iqn.2016-06.io.spdk", 00:05:39.887 "max_sessions": 128, 00:05:39.887 "max_connections_per_session": 2, 00:05:39.887 "max_queue_depth": 64, 00:05:39.887 "default_time2wait": 2, 00:05:39.887 "default_time2retain": 20, 00:05:39.887 "first_burst_length": 8192, 00:05:39.887 "immediate_data": true, 00:05:39.887 "allow_duplicated_isid": false, 00:05:39.887 "error_recovery_level": 0, 00:05:39.887 "nop_timeout": 60, 00:05:39.887 "nop_in_interval": 30, 00:05:39.887 "disable_chap": false, 00:05:39.887 "require_chap": false, 00:05:39.887 "mutual_chap": false, 00:05:39.887 "chap_group": 0, 00:05:39.887 "max_large_datain_per_connection": 64, 00:05:39.887 "max_r2t_per_connection": 4, 00:05:39.887 "pdu_pool_size": 36864, 00:05:39.887 "immediate_data_pool_size": 16384, 00:05:39.887 "data_out_pool_size": 2048 00:05:39.887 } 00:05:39.887 } 00:05:39.887 ] 00:05:39.887 } 
00:05:39.887 ] 00:05:39.887 } 00:05:39.887 19:27:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:39.887 19:27:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 62493 00:05:39.887 19:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 62493 ']' 00:05:39.887 19:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 62493 00:05:39.887 19:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:39.887 19:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.887 19:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62493 00:05:39.887 19:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.887 19:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.887 killing process with pid 62493 00:05:39.887 19:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62493' 00:05:39.887 19:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 62493 00:05:39.887 19:27:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 62493 00:05:43.172 19:27:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=62560 00:05:43.172 19:27:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:43.172 19:27:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:48.441 19:27:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 62560 00:05:48.441 19:27:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 62560 ']' 00:05:48.441 19:27:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 62560 00:05:48.441 19:27:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:48.441 19:27:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.441 19:27:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62560 00:05:48.441 19:27:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:48.441 19:27:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.441 killing process with pid 62560 00:05:48.441 19:27:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62560' 00:05:48.441 19:27:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 62560 00:05:48.441 19:27:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 62560 00:05:50.333 19:27:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:50.333 19:27:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:50.333 00:05:50.333 real 0m12.302s 00:05:50.333 user 0m11.647s 00:05:50.333 sys 0m0.997s 00:05:50.333 19:27:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.333 19:27:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.333 
************************************ 00:05:50.333 END TEST skip_rpc_with_json 00:05:50.333 ************************************ 00:05:50.333 19:27:41 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:50.333 19:27:41 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:50.333 19:27:41 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.333 19:27:41 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.333 19:27:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.333 ************************************ 00:05:50.333 START TEST skip_rpc_with_delay 00:05:50.333 ************************************ 00:05:50.333 19:27:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:50.333 19:27:41 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:50.333 19:27:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:50.333 19:27:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:50.333 19:27:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:50.333 19:27:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.333 19:27:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:50.333 19:27:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.333 19:27:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:50.333 19:27:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.333 19:27:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:50.333 19:27:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:50.333 19:27:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:50.589 [2024-07-15 19:27:41.223423] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
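This error is the point of the test: skip_rpc_with_delay launches the target with a deliberately contradictory flag pair and only asserts that startup fails. Reproduced in isolation (same binary path as elsewhere in this run), the non-zero exit status is what the es=1 bookkeeping just below records:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  echo $?   # non-zero: --wait-for-rpc needs the RPC server that --no-rpc-server disables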
00:05:50.589 [2024-07-15 19:27:41.223611] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:50.589 19:27:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:50.589 19:27:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:50.589 19:27:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:50.589 19:27:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:50.589 00:05:50.589 real 0m0.214s 00:05:50.589 user 0m0.105s 00:05:50.589 sys 0m0.105s 00:05:50.589 19:27:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.589 19:27:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:50.589 ************************************ 00:05:50.589 END TEST skip_rpc_with_delay 00:05:50.589 ************************************ 00:05:50.589 19:27:41 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:50.589 19:27:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:50.589 19:27:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:50.589 19:27:41 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:50.589 19:27:41 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.589 19:27:41 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.589 19:27:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.589 ************************************ 00:05:50.589 START TEST exit_on_failed_rpc_init 00:05:50.589 ************************************ 00:05:50.589 19:27:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:50.589 19:27:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.589 19:27:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62693 00:05:50.589 19:27:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 62693 00:05:50.589 19:27:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 62693 ']' 00:05:50.589 19:27:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.589 19:27:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.589 19:27:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.589 19:27:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.589 19:27:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:50.887 [2024-07-15 19:27:41.457404] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:05:50.887 [2024-07-15 19:27:41.457538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62693 ] 00:05:50.887 [2024-07-15 19:27:41.624383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.178 [2024-07-15 19:27:41.874013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.548 19:27:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.548 19:27:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:52.548 19:27:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.548 19:27:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:52.548 19:27:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:52.548 19:27:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:52.548 19:27:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:52.548 19:27:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.548 19:27:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:52.548 19:27:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.548 19:27:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:52.548 19:27:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.548 19:27:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:52.548 19:27:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:52.548 19:27:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:52.548 [2024-07-15 19:27:43.118295] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:52.548 [2024-07-15 19:27:43.118490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62717 ] 00:05:52.548 [2024-07-15 19:27:43.297432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.112 [2024-07-15 19:27:43.627513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.112 [2024-07-15 19:27:43.627640] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
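The failure above is deliberate: both spdk_tgt instances default to the same RPC listen address, so the second one (-m 0x2) cannot bind /var/tmp/spdk.sock and exits non-zero, which is exactly what exit_on_failed_rpc_init asserts. A sketch of the collision, plus the usual way a second instance would avoid it (the -r option for picking an alternate RPC socket is an assumption, it is not exercised by this test):
  build/bin/spdk_tgt -m 0x1 &                          # first instance claims /var/tmp/spdk.sock
  build/bin/spdk_tgt -m 0x2                            # fails: 'RPC Unix domain socket path ... in use'
  build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock     # a second instance is normally given its own socket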
00:05:53.112 [2024-07-15 19:27:43.627667] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:53.112 [2024-07-15 19:27:43.627686] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:53.369 19:27:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:53.369 19:27:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:53.369 19:27:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:53.369 19:27:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:53.369 19:27:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:53.369 19:27:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:53.369 19:27:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:53.369 19:27:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 62693 00:05:53.369 19:27:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 62693 ']' 00:05:53.369 19:27:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 62693 00:05:53.369 19:27:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:53.369 19:27:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.369 19:27:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62693 00:05:53.627 19:27:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.627 19:27:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.627 killing process with pid 62693 00:05:53.627 19:27:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62693' 00:05:53.627 19:27:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 62693 00:05:53.627 19:27:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 62693 00:05:56.960 00:05:56.960 real 0m5.725s 00:05:56.960 user 0m6.576s 00:05:56.960 sys 0m0.648s 00:05:56.960 ************************************ 00:05:56.960 END TEST exit_on_failed_rpc_init 00:05:56.960 ************************************ 00:05:56.960 19:27:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.960 19:27:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:56.960 19:27:47 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:56.960 19:27:47 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:56.960 00:05:56.960 real 0m26.298s 00:05:56.960 user 0m25.640s 00:05:56.960 sys 0m2.369s 00:05:56.960 19:27:47 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.960 19:27:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.960 ************************************ 00:05:56.960 END TEST skip_rpc 00:05:56.960 ************************************ 00:05:56.960 19:27:47 -- common/autotest_common.sh@1142 -- # return 0 00:05:56.960 19:27:47 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:56.960 19:27:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.960 
19:27:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.960 19:27:47 -- common/autotest_common.sh@10 -- # set +x 00:05:56.960 ************************************ 00:05:56.960 START TEST rpc_client 00:05:56.960 ************************************ 00:05:56.960 19:27:47 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:56.960 * Looking for test storage... 00:05:56.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:56.960 19:27:47 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:56.960 OK 00:05:56.960 19:27:47 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:56.960 00:05:56.960 real 0m0.142s 00:05:56.960 user 0m0.058s 00:05:56.960 sys 0m0.089s 00:05:56.960 19:27:47 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.960 19:27:47 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:56.960 ************************************ 00:05:56.960 END TEST rpc_client 00:05:56.960 ************************************ 00:05:56.960 19:27:47 -- common/autotest_common.sh@1142 -- # return 0 00:05:56.960 19:27:47 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:56.961 19:27:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.961 19:27:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.961 19:27:47 -- common/autotest_common.sh@10 -- # set +x 00:05:56.961 ************************************ 00:05:56.961 START TEST json_config 00:05:56.961 ************************************ 00:05:56.961 19:27:47 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:56.961 19:27:47 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85d4478b-635a-462e-8237-2d2157ba9cca 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=85d4478b-635a-462e-8237-2d2157ba9cca 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:56.961 19:27:47 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:56.961 19:27:47 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.961 19:27:47 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.961 19:27:47 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.961 19:27:47 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.961 19:27:47 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.961 19:27:47 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.961 19:27:47 json_config -- paths/export.sh@5 -- # export PATH 00:05:56.961 19:27:47 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@47 -- # : 0 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:56.961 19:27:47 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:56.961 19:27:47 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:56.961 19:27:47 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:56.961 19:27:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:56.961 19:27:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:56.961 19:27:47 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:56.961 19:27:47 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:56.961 WARNING: No tests are enabled so not running JSON configuration tests 00:05:56.961 19:27:47 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:56.961 00:05:56.961 real 0m0.080s 00:05:56.961 user 0m0.033s 00:05:56.961 sys 0m0.046s 00:05:56.961 19:27:47 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.961 19:27:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.961 ************************************ 00:05:56.961 END TEST json_config 00:05:56.961 ************************************ 00:05:56.961 19:27:47 -- common/autotest_common.sh@1142 -- # return 0 00:05:56.961 19:27:47 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:56.961 19:27:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.961 19:27:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.961 19:27:47 -- common/autotest_common.sh@10 -- # set +x 00:05:56.961 ************************************ 00:05:56.961 START TEST json_config_extra_key 00:05:56.961 ************************************ 00:05:56.961 19:27:47 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:56.961 19:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85d4478b-635a-462e-8237-2d2157ba9cca 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=85d4478b-635a-462e-8237-2d2157ba9cca 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:56.961 19:27:47 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:05:56.961 19:27:47 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.961 19:27:47 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.961 19:27:47 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.961 19:27:47 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.961 19:27:47 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.961 19:27:47 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:56.961 19:27:47 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:56.961 19:27:47 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:56.961 19:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:56.961 19:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:56.961 19:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:56.961 19:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 
00:05:56.961 19:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:56.961 19:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:56.961 19:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:56.961 19:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:56.961 19:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:56.961 19:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:56.961 19:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:56.961 INFO: launching applications... 00:05:56.962 19:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:56.962 19:27:47 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:56.962 19:27:47 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:56.962 19:27:47 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:56.962 19:27:47 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:56.962 19:27:47 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:56.962 19:27:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.962 19:27:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.962 Waiting for target to run... 00:05:56.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:56.962 19:27:47 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=62913 00:05:56.962 19:27:47 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:56.962 19:27:47 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 62913 /var/tmp/spdk_tgt.sock 00:05:56.962 19:27:47 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 62913 ']' 00:05:56.962 19:27:47 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:56.962 19:27:47 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:56.962 19:27:47 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.962 19:27:47 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:56.962 19:27:47 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.962 19:27:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:56.962 [2024-07-15 19:27:47.699831] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
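The json_config_extra_key run traced above launches spdk_tgt with "-m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json extra_key.json" and then waits for the RPC socket to come up before doing anything else. A minimal sketch of that wait-for-listen idea follows; the helper name and the 100 x 0.1 s polling budget are illustrative (this is not the autotest_common.sh implementation), and it assumes rpc.py is invoked from the SPDK repo root:

  # Poll until the spdk_tgt RPC socket exists and answers a trivial RPC.
  wait_for_rpc_socket() {
    local sock=$1 pid=$2 tries=0
    while (( tries++ < 100 )); do
      kill -0 "$pid" 2>/dev/null || return 1      # target died before it started listening
      if [[ -S $sock ]] && scripts/rpc.py -s "$sock" spdk_get_version &>/dev/null; then
        return 0                                  # socket present and spdk_get_version answers
      fi
      sleep 0.1
    done
    return 1
  }
  # e.g. wait_for_rpc_socket /var/tmp/spdk_tgt.sock "$tgt_pid"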
00:05:56.962 [2024-07-15 19:27:47.700011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62913 ] 00:05:57.525 [2024-07-15 19:27:48.090083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.782 [2024-07-15 19:27:48.345472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.714 00:05:58.714 INFO: shutting down applications... 00:05:58.714 19:27:49 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.714 19:27:49 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:58.714 19:27:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:58.714 19:27:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:58.714 19:27:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:58.714 19:27:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:58.714 19:27:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:58.714 19:27:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 62913 ]] 00:05:58.714 19:27:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 62913 00:05:58.714 19:27:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:58.714 19:27:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.714 19:27:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62913 00:05:58.714 19:27:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:58.971 19:27:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:58.971 19:27:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.971 19:27:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62913 00:05:58.971 19:27:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:59.536 19:27:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:59.536 19:27:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.536 19:27:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62913 00:05:59.536 19:27:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:00.101 19:27:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:00.101 19:27:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:00.101 19:27:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62913 00:06:00.101 19:27:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:00.667 19:27:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:00.667 19:27:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:00.667 19:27:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62913 00:06:00.667 19:27:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:01.232 19:27:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:01.232 19:27:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:01.232 19:27:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62913 
00:06:01.232 19:27:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:01.489 19:27:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:01.489 19:27:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:01.489 19:27:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62913 00:06:01.489 19:27:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:02.055 19:27:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:02.055 19:27:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:02.055 19:27:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62913 00:06:02.055 19:27:52 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:02.055 19:27:52 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:02.055 SPDK target shutdown done 00:06:02.055 Success 00:06:02.055 19:27:52 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:02.055 19:27:52 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:02.055 19:27:52 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:02.055 00:06:02.055 real 0m5.270s 00:06:02.055 user 0m4.829s 00:06:02.055 sys 0m0.575s 00:06:02.055 19:27:52 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.055 19:27:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:02.055 ************************************ 00:06:02.055 END TEST json_config_extra_key 00:06:02.055 ************************************ 00:06:02.055 19:27:52 -- common/autotest_common.sh@1142 -- # return 0 00:06:02.056 19:27:52 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:02.056 19:27:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.056 19:27:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.056 19:27:52 -- common/autotest_common.sh@10 -- # set +x 00:06:02.056 ************************************ 00:06:02.056 START TEST alias_rpc 00:06:02.056 ************************************ 00:06:02.056 19:27:52 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:02.313 * Looking for test storage... 00:06:02.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:02.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.313 19:27:52 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:02.313 19:27:52 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=63024 00:06:02.313 19:27:52 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 63024 00:06:02.313 19:27:52 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 63024 ']' 00:06:02.313 19:27:52 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.313 19:27:52 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.313 19:27:52 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.313 19:27:52 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
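The json_config_extra_key shutdown sequence logged above is a plain graceful-stop pattern: send SIGINT, then probe with "kill -0" every 0.5 s for at most 30 attempts until the process disappears. A condensed stand-alone sketch of the same loop (names are illustrative, not the json_config/common.sh functions):

  stop_target() {
    local pid=$1 i
    kill -SIGINT "$pid" 2>/dev/null               # ask the target to shut down cleanly
    for (( i = 0; i < 30; i++ )); do
      kill -0 "$pid" 2>/dev/null || return 0      # kill -0 fails once the process is gone
      sleep 0.5
    done
    kill -9 "$pid" 2>/dev/null                    # last resort if it never exits
    return 1
  }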
00:06:02.313 19:27:52 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.313 19:27:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.313 [2024-07-15 19:27:53.068060] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:06:02.313 [2024-07-15 19:27:53.068237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63024 ] 00:06:02.570 [2024-07-15 19:27:53.251555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.828 [2024-07-15 19:27:53.516089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.760 19:27:54 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.018 19:27:54 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:04.018 19:27:54 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:04.275 19:27:54 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 63024 00:06:04.275 19:27:54 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 63024 ']' 00:06:04.275 19:27:54 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 63024 00:06:04.275 19:27:54 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:04.275 19:27:54 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.275 19:27:54 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63024 00:06:04.275 killing process with pid 63024 00:06:04.275 19:27:54 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.275 19:27:54 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.275 19:27:54 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63024' 00:06:04.275 19:27:54 alias_rpc -- common/autotest_common.sh@967 -- # kill 63024 00:06:04.275 19:27:54 alias_rpc -- common/autotest_common.sh@972 -- # wait 63024 00:06:07.582 ************************************ 00:06:07.582 END TEST alias_rpc 00:06:07.582 ************************************ 00:06:07.582 00:06:07.582 real 0m4.960s 00:06:07.582 user 0m4.978s 00:06:07.582 sys 0m0.601s 00:06:07.582 19:27:57 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.582 19:27:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.582 19:27:57 -- common/autotest_common.sh@1142 -- # return 0 00:06:07.582 19:27:57 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:07.582 19:27:57 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:07.582 19:27:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.582 19:27:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.582 19:27:57 -- common/autotest_common.sh@10 -- # set +x 00:06:07.582 ************************************ 00:06:07.582 START TEST spdkcli_tcp 00:06:07.582 ************************************ 00:06:07.582 19:27:57 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:07.582 * Looking for test storage... 
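The killprocess trace above shows the teardown guard used throughout these tests: confirm the PID is non-empty and still alive, read its command name with ps so a reused PID (or a sudo wrapper) is never killed by mistake, then kill and wait. A condensed sketch of that guard, illustrative rather than the exact autotest_common.sh code:

  killprocess() {
    local pid=$1 name
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0        # nothing to do if it already exited
    name=$(ps --no-headers -o comm= "$pid")       # refuse to kill an unrelated reused PID
    [[ $name == sudo ]] && return 1
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid" 2>/dev/null || true
  }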
00:06:07.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:07.582 19:27:57 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:07.582 19:27:57 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:07.582 19:27:57 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:07.582 19:27:57 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:07.582 19:27:57 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:07.582 19:27:57 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:07.582 19:27:57 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:07.582 19:27:57 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:07.582 19:27:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:07.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.582 19:27:57 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=63129 00:06:07.582 19:27:57 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 63129 00:06:07.582 19:27:57 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:07.582 19:27:57 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 63129 ']' 00:06:07.582 19:27:57 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.582 19:27:57 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.582 19:27:57 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.582 19:27:57 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.582 19:27:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:07.582 [2024-07-15 19:27:58.058530] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
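The spdkcli_tcp test above starts spdk_tgt with "-m 0x3 -p 0", i.e. a core mask selecting cores 0 and 1 with core 0 as the main core; the two reactor-start notices that follow confirm both cores come up. A small illustrative helper that expands such a hex mask into core indices (not part of the test scripts):

  mask_to_cores() {
    local mask=$(( $1 )) core=0 cores=()
    while (( mask )); do
      (( mask & 1 )) && cores+=("$core")          # bit N set -> core N is in the mask
      (( mask >>= 1, core++ ))
    done
    echo "${cores[*]}"
  }
  # mask_to_cores 0x3  ->  "0 1"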
00:06:07.582 [2024-07-15 19:27:58.058749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63129 ] 00:06:07.582 [2024-07-15 19:27:58.256407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.840 [2024-07-15 19:27:58.572560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.840 [2024-07-15 19:27:58.572592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.213 19:27:59 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.213 19:27:59 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:09.213 19:27:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=63151 00:06:09.213 19:27:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:09.213 19:27:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:09.213 [ 00:06:09.213 "bdev_malloc_delete", 00:06:09.213 "bdev_malloc_create", 00:06:09.213 "bdev_null_resize", 00:06:09.213 "bdev_null_delete", 00:06:09.213 "bdev_null_create", 00:06:09.213 "bdev_nvme_cuse_unregister", 00:06:09.213 "bdev_nvme_cuse_register", 00:06:09.213 "bdev_opal_new_user", 00:06:09.213 "bdev_opal_set_lock_state", 00:06:09.213 "bdev_opal_delete", 00:06:09.213 "bdev_opal_get_info", 00:06:09.213 "bdev_opal_create", 00:06:09.213 "bdev_nvme_opal_revert", 00:06:09.213 "bdev_nvme_opal_init", 00:06:09.213 "bdev_nvme_send_cmd", 00:06:09.213 "bdev_nvme_get_path_iostat", 00:06:09.213 "bdev_nvme_get_mdns_discovery_info", 00:06:09.213 "bdev_nvme_stop_mdns_discovery", 00:06:09.213 "bdev_nvme_start_mdns_discovery", 00:06:09.213 "bdev_nvme_set_multipath_policy", 00:06:09.213 "bdev_nvme_set_preferred_path", 00:06:09.213 "bdev_nvme_get_io_paths", 00:06:09.213 "bdev_nvme_remove_error_injection", 00:06:09.213 "bdev_nvme_add_error_injection", 00:06:09.213 "bdev_nvme_get_discovery_info", 00:06:09.213 "bdev_nvme_stop_discovery", 00:06:09.213 "bdev_nvme_start_discovery", 00:06:09.213 "bdev_nvme_get_controller_health_info", 00:06:09.213 "bdev_nvme_disable_controller", 00:06:09.213 "bdev_nvme_enable_controller", 00:06:09.213 "bdev_nvme_reset_controller", 00:06:09.213 "bdev_nvme_get_transport_statistics", 00:06:09.213 "bdev_nvme_apply_firmware", 00:06:09.213 "bdev_nvme_detach_controller", 00:06:09.213 "bdev_nvme_get_controllers", 00:06:09.213 "bdev_nvme_attach_controller", 00:06:09.213 "bdev_nvme_set_hotplug", 00:06:09.213 "bdev_nvme_set_options", 00:06:09.213 "bdev_passthru_delete", 00:06:09.213 "bdev_passthru_create", 00:06:09.213 "bdev_lvol_set_parent_bdev", 00:06:09.213 "bdev_lvol_set_parent", 00:06:09.213 "bdev_lvol_check_shallow_copy", 00:06:09.213 "bdev_lvol_start_shallow_copy", 00:06:09.213 "bdev_lvol_grow_lvstore", 00:06:09.213 "bdev_lvol_get_lvols", 00:06:09.213 "bdev_lvol_get_lvstores", 00:06:09.213 "bdev_lvol_delete", 00:06:09.213 "bdev_lvol_set_read_only", 00:06:09.213 "bdev_lvol_resize", 00:06:09.213 "bdev_lvol_decouple_parent", 00:06:09.213 "bdev_lvol_inflate", 00:06:09.213 "bdev_lvol_rename", 00:06:09.213 "bdev_lvol_clone_bdev", 00:06:09.213 "bdev_lvol_clone", 00:06:09.213 "bdev_lvol_snapshot", 00:06:09.213 "bdev_lvol_create", 00:06:09.213 "bdev_lvol_delete_lvstore", 00:06:09.213 "bdev_lvol_rename_lvstore", 00:06:09.213 "bdev_lvol_create_lvstore", 
00:06:09.213 "bdev_raid_set_options", 00:06:09.213 "bdev_raid_remove_base_bdev", 00:06:09.213 "bdev_raid_add_base_bdev", 00:06:09.213 "bdev_raid_delete", 00:06:09.213 "bdev_raid_create", 00:06:09.213 "bdev_raid_get_bdevs", 00:06:09.213 "bdev_error_inject_error", 00:06:09.213 "bdev_error_delete", 00:06:09.213 "bdev_error_create", 00:06:09.213 "bdev_split_delete", 00:06:09.213 "bdev_split_create", 00:06:09.213 "bdev_delay_delete", 00:06:09.213 "bdev_delay_create", 00:06:09.213 "bdev_delay_update_latency", 00:06:09.213 "bdev_zone_block_delete", 00:06:09.213 "bdev_zone_block_create", 00:06:09.213 "blobfs_create", 00:06:09.213 "blobfs_detect", 00:06:09.213 "blobfs_set_cache_size", 00:06:09.213 "bdev_xnvme_delete", 00:06:09.213 "bdev_xnvme_create", 00:06:09.213 "bdev_aio_delete", 00:06:09.213 "bdev_aio_rescan", 00:06:09.213 "bdev_aio_create", 00:06:09.213 "bdev_ftl_set_property", 00:06:09.213 "bdev_ftl_get_properties", 00:06:09.213 "bdev_ftl_get_stats", 00:06:09.213 "bdev_ftl_unmap", 00:06:09.213 "bdev_ftl_unload", 00:06:09.213 "bdev_ftl_delete", 00:06:09.213 "bdev_ftl_load", 00:06:09.213 "bdev_ftl_create", 00:06:09.213 "bdev_virtio_attach_controller", 00:06:09.213 "bdev_virtio_scsi_get_devices", 00:06:09.213 "bdev_virtio_detach_controller", 00:06:09.213 "bdev_virtio_blk_set_hotplug", 00:06:09.213 "bdev_iscsi_delete", 00:06:09.213 "bdev_iscsi_create", 00:06:09.213 "bdev_iscsi_set_options", 00:06:09.213 "accel_error_inject_error", 00:06:09.213 "ioat_scan_accel_module", 00:06:09.213 "dsa_scan_accel_module", 00:06:09.213 "iaa_scan_accel_module", 00:06:09.213 "keyring_file_remove_key", 00:06:09.213 "keyring_file_add_key", 00:06:09.213 "keyring_linux_set_options", 00:06:09.213 "iscsi_get_histogram", 00:06:09.213 "iscsi_enable_histogram", 00:06:09.213 "iscsi_set_options", 00:06:09.213 "iscsi_get_auth_groups", 00:06:09.214 "iscsi_auth_group_remove_secret", 00:06:09.214 "iscsi_auth_group_add_secret", 00:06:09.214 "iscsi_delete_auth_group", 00:06:09.214 "iscsi_create_auth_group", 00:06:09.214 "iscsi_set_discovery_auth", 00:06:09.214 "iscsi_get_options", 00:06:09.214 "iscsi_target_node_request_logout", 00:06:09.214 "iscsi_target_node_set_redirect", 00:06:09.214 "iscsi_target_node_set_auth", 00:06:09.214 "iscsi_target_node_add_lun", 00:06:09.214 "iscsi_get_stats", 00:06:09.214 "iscsi_get_connections", 00:06:09.214 "iscsi_portal_group_set_auth", 00:06:09.214 "iscsi_start_portal_group", 00:06:09.214 "iscsi_delete_portal_group", 00:06:09.214 "iscsi_create_portal_group", 00:06:09.214 "iscsi_get_portal_groups", 00:06:09.214 "iscsi_delete_target_node", 00:06:09.214 "iscsi_target_node_remove_pg_ig_maps", 00:06:09.214 "iscsi_target_node_add_pg_ig_maps", 00:06:09.214 "iscsi_create_target_node", 00:06:09.214 "iscsi_get_target_nodes", 00:06:09.214 "iscsi_delete_initiator_group", 00:06:09.214 "iscsi_initiator_group_remove_initiators", 00:06:09.214 "iscsi_initiator_group_add_initiators", 00:06:09.214 "iscsi_create_initiator_group", 00:06:09.214 "iscsi_get_initiator_groups", 00:06:09.214 "nvmf_set_crdt", 00:06:09.214 "nvmf_set_config", 00:06:09.214 "nvmf_set_max_subsystems", 00:06:09.214 "nvmf_stop_mdns_prr", 00:06:09.214 "nvmf_publish_mdns_prr", 00:06:09.214 "nvmf_subsystem_get_listeners", 00:06:09.214 "nvmf_subsystem_get_qpairs", 00:06:09.214 "nvmf_subsystem_get_controllers", 00:06:09.214 "nvmf_get_stats", 00:06:09.214 "nvmf_get_transports", 00:06:09.214 "nvmf_create_transport", 00:06:09.214 "nvmf_get_targets", 00:06:09.214 "nvmf_delete_target", 00:06:09.214 "nvmf_create_target", 00:06:09.214 
"nvmf_subsystem_allow_any_host", 00:06:09.214 "nvmf_subsystem_remove_host", 00:06:09.214 "nvmf_subsystem_add_host", 00:06:09.214 "nvmf_ns_remove_host", 00:06:09.214 "nvmf_ns_add_host", 00:06:09.214 "nvmf_subsystem_remove_ns", 00:06:09.214 "nvmf_subsystem_add_ns", 00:06:09.214 "nvmf_subsystem_listener_set_ana_state", 00:06:09.214 "nvmf_discovery_get_referrals", 00:06:09.214 "nvmf_discovery_remove_referral", 00:06:09.214 "nvmf_discovery_add_referral", 00:06:09.214 "nvmf_subsystem_remove_listener", 00:06:09.214 "nvmf_subsystem_add_listener", 00:06:09.214 "nvmf_delete_subsystem", 00:06:09.214 "nvmf_create_subsystem", 00:06:09.214 "nvmf_get_subsystems", 00:06:09.214 "env_dpdk_get_mem_stats", 00:06:09.214 "nbd_get_disks", 00:06:09.214 "nbd_stop_disk", 00:06:09.214 "nbd_start_disk", 00:06:09.214 "ublk_recover_disk", 00:06:09.214 "ublk_get_disks", 00:06:09.214 "ublk_stop_disk", 00:06:09.214 "ublk_start_disk", 00:06:09.214 "ublk_destroy_target", 00:06:09.214 "ublk_create_target", 00:06:09.214 "virtio_blk_create_transport", 00:06:09.214 "virtio_blk_get_transports", 00:06:09.214 "vhost_controller_set_coalescing", 00:06:09.214 "vhost_get_controllers", 00:06:09.214 "vhost_delete_controller", 00:06:09.214 "vhost_create_blk_controller", 00:06:09.214 "vhost_scsi_controller_remove_target", 00:06:09.214 "vhost_scsi_controller_add_target", 00:06:09.214 "vhost_start_scsi_controller", 00:06:09.214 "vhost_create_scsi_controller", 00:06:09.214 "thread_set_cpumask", 00:06:09.214 "framework_get_governor", 00:06:09.214 "framework_get_scheduler", 00:06:09.214 "framework_set_scheduler", 00:06:09.214 "framework_get_reactors", 00:06:09.214 "thread_get_io_channels", 00:06:09.214 "thread_get_pollers", 00:06:09.214 "thread_get_stats", 00:06:09.214 "framework_monitor_context_switch", 00:06:09.214 "spdk_kill_instance", 00:06:09.214 "log_enable_timestamps", 00:06:09.214 "log_get_flags", 00:06:09.214 "log_clear_flag", 00:06:09.214 "log_set_flag", 00:06:09.214 "log_get_level", 00:06:09.214 "log_set_level", 00:06:09.214 "log_get_print_level", 00:06:09.214 "log_set_print_level", 00:06:09.214 "framework_enable_cpumask_locks", 00:06:09.214 "framework_disable_cpumask_locks", 00:06:09.214 "framework_wait_init", 00:06:09.214 "framework_start_init", 00:06:09.214 "scsi_get_devices", 00:06:09.214 "bdev_get_histogram", 00:06:09.214 "bdev_enable_histogram", 00:06:09.214 "bdev_set_qos_limit", 00:06:09.214 "bdev_set_qd_sampling_period", 00:06:09.214 "bdev_get_bdevs", 00:06:09.214 "bdev_reset_iostat", 00:06:09.214 "bdev_get_iostat", 00:06:09.214 "bdev_examine", 00:06:09.214 "bdev_wait_for_examine", 00:06:09.214 "bdev_set_options", 00:06:09.214 "notify_get_notifications", 00:06:09.214 "notify_get_types", 00:06:09.214 "accel_get_stats", 00:06:09.214 "accel_set_options", 00:06:09.214 "accel_set_driver", 00:06:09.214 "accel_crypto_key_destroy", 00:06:09.214 "accel_crypto_keys_get", 00:06:09.214 "accel_crypto_key_create", 00:06:09.214 "accel_assign_opc", 00:06:09.214 "accel_get_module_info", 00:06:09.214 "accel_get_opc_assignments", 00:06:09.214 "vmd_rescan", 00:06:09.214 "vmd_remove_device", 00:06:09.214 "vmd_enable", 00:06:09.214 "sock_get_default_impl", 00:06:09.214 "sock_set_default_impl", 00:06:09.214 "sock_impl_set_options", 00:06:09.214 "sock_impl_get_options", 00:06:09.214 "iobuf_get_stats", 00:06:09.214 "iobuf_set_options", 00:06:09.214 "framework_get_pci_devices", 00:06:09.214 "framework_get_config", 00:06:09.214 "framework_get_subsystems", 00:06:09.214 "trace_get_info", 00:06:09.214 "trace_get_tpoint_group_mask", 00:06:09.214 
"trace_disable_tpoint_group", 00:06:09.214 "trace_enable_tpoint_group", 00:06:09.214 "trace_clear_tpoint_mask", 00:06:09.214 "trace_set_tpoint_mask", 00:06:09.214 "keyring_get_keys", 00:06:09.214 "spdk_get_version", 00:06:09.214 "rpc_get_methods" 00:06:09.214 ] 00:06:09.214 19:27:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:09.214 19:27:59 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:09.214 19:27:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:09.214 19:27:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:09.214 19:27:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 63129 00:06:09.214 19:27:59 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 63129 ']' 00:06:09.214 19:27:59 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 63129 00:06:09.214 19:27:59 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:09.214 19:27:59 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.214 19:27:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63129 00:06:09.214 killing process with pid 63129 00:06:09.214 19:27:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:09.214 19:27:59 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:09.214 19:27:59 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63129' 00:06:09.214 19:27:59 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 63129 00:06:09.214 19:27:59 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 63129 00:06:12.491 ************************************ 00:06:12.491 END TEST spdkcli_tcp 00:06:12.491 ************************************ 00:06:12.491 00:06:12.491 real 0m4.889s 00:06:12.491 user 0m8.559s 00:06:12.491 sys 0m0.629s 00:06:12.491 19:28:02 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.491 19:28:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:12.491 19:28:02 -- common/autotest_common.sh@1142 -- # return 0 00:06:12.491 19:28:02 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:12.491 19:28:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.491 19:28:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.491 19:28:02 -- common/autotest_common.sh@10 -- # set +x 00:06:12.491 ************************************ 00:06:12.491 START TEST dpdk_mem_utility 00:06:12.491 ************************************ 00:06:12.491 19:28:02 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:12.491 * Looking for test storage... 
00:06:12.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:12.491 19:28:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:12.491 19:28:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=63250 00:06:12.491 19:28:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.491 19:28:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 63250 00:06:12.491 19:28:02 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 63250 ']' 00:06:12.491 19:28:02 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.491 19:28:02 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.491 19:28:02 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.491 19:28:02 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.491 19:28:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:12.491 [2024-07-15 19:28:03.011267] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:06:12.491 [2024-07-15 19:28:03.011701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63250 ] 00:06:12.491 [2024-07-15 19:28:03.198729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.747 [2024-07-15 19:28:03.497284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.120 19:28:04 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.120 19:28:04 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:14.120 19:28:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:14.120 19:28:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:14.120 19:28:04 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.120 19:28:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:14.120 { 00:06:14.120 "filename": "/tmp/spdk_mem_dump.txt" 00:06:14.120 } 00:06:14.120 19:28:04 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.120 19:28:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:14.120 DPDK memory size 820.000000 MiB in 1 heap(s) 00:06:14.120 1 heaps totaling size 820.000000 MiB 00:06:14.120 size: 820.000000 MiB heap id: 0 00:06:14.120 end heaps---------- 00:06:14.120 8 mempools totaling size 598.116089 MiB 00:06:14.120 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:14.120 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:14.120 size: 84.521057 MiB name: bdev_io_63250 00:06:14.120 size: 51.011292 MiB name: evtpool_63250 00:06:14.120 size: 50.003479 MiB name: msgpool_63250 00:06:14.120 size: 21.763794 MiB name: PDU_Pool 00:06:14.120 size: 19.513306 MiB name: SCSI_TASK_Pool 
00:06:14.120 size: 0.026123 MiB name: Session_Pool 00:06:14.120 end mempools------- 00:06:14.120 6 memzones totaling size 4.142822 MiB 00:06:14.120 size: 1.000366 MiB name: RG_ring_0_63250 00:06:14.120 size: 1.000366 MiB name: RG_ring_1_63250 00:06:14.120 size: 1.000366 MiB name: RG_ring_4_63250 00:06:14.120 size: 1.000366 MiB name: RG_ring_5_63250 00:06:14.120 size: 0.125366 MiB name: RG_ring_2_63250 00:06:14.120 size: 0.015991 MiB name: RG_ring_3_63250 00:06:14.120 end memzones------- 00:06:14.120 19:28:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:14.120 heap id: 0 total size: 820.000000 MiB number of busy elements: 299 number of free elements: 18 00:06:14.120 list of free elements. size: 18.451782 MiB 00:06:14.120 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:14.120 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:14.120 element at address: 0x200007000000 with size: 1.995972 MiB 00:06:14.120 element at address: 0x20000b200000 with size: 1.995972 MiB 00:06:14.120 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:14.120 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:14.120 element at address: 0x200019600000 with size: 0.999084 MiB 00:06:14.120 element at address: 0x200003e00000 with size: 0.996094 MiB 00:06:14.120 element at address: 0x200032200000 with size: 0.994324 MiB 00:06:14.120 element at address: 0x200018e00000 with size: 0.959656 MiB 00:06:14.120 element at address: 0x200019900040 with size: 0.936401 MiB 00:06:14.120 element at address: 0x200000200000 with size: 0.829956 MiB 00:06:14.120 element at address: 0x20001b000000 with size: 0.564392 MiB 00:06:14.120 element at address: 0x200019200000 with size: 0.487976 MiB 00:06:14.120 element at address: 0x200019a00000 with size: 0.485413 MiB 00:06:14.120 element at address: 0x200013800000 with size: 0.467896 MiB 00:06:14.120 element at address: 0x200028400000 with size: 0.390442 MiB 00:06:14.120 element at address: 0x200003a00000 with size: 0.351990 MiB 00:06:14.120 list of standard malloc elements. 
size: 199.283813 MiB 00:06:14.120 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:06:14.120 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:06:14.120 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:14.120 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:14.120 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:14.120 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:14.120 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:06:14.120 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:14.120 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:06:14.120 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:06:14.120 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:06:14.120 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d6e00 with size: 0.000244 MiB 
00:06:14.120 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:14.120 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:14.120 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:06:14.120 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:06:14.120 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:06:14.120 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:06:14.120 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:06:14.120 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:06:14.120 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:06:14.120 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:06:14.120 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:06:14.120 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:06:14.120 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:06:14.120 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:06:14.120 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:06:14.120 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:06:14.120 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x200003aff980 with size: 0.000244 MiB 00:06:14.121 element at address: 0x200003affa80 with size: 0.000244 MiB 00:06:14.121 element at address: 0x200003eff000 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:06:14.121 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:06:14.121 element at 
address: 0x2000137ff280 with size: 0.000244 MiB 00:06:14.121 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:06:14.121 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:06:14.121 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:06:14.121 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:06:14.121 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:06:14.121 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:06:14.121 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:06:14.121 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:06:14.121 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:06:14.121 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:06:14.121 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:06:14.121 element at address: 0x200013877c80 with size: 0.000244 MiB 00:06:14.121 element at address: 0x200013877d80 with size: 0.000244 MiB 00:06:14.121 element at address: 0x200013877e80 with size: 0.000244 MiB 00:06:14.121 element at address: 0x200013877f80 with size: 0.000244 MiB 00:06:14.121 element at address: 0x200013878080 with size: 0.000244 MiB 00:06:14.121 element at address: 0x200013878180 with size: 0.000244 MiB 00:06:14.121 element at address: 0x200013878280 with size: 0.000244 MiB 00:06:14.121 element at address: 0x200013878380 with size: 0.000244 MiB 00:06:14.121 element at address: 0x200013878480 with size: 0.000244 MiB 00:06:14.121 element at address: 0x200013878580 with size: 0.000244 MiB 00:06:14.121 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:06:14.121 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:06:14.121 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x200019abc680 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b090fc0 
with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0940c0 with size: 0.000244 MiB 
00:06:14.121 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:06:14.121 element at address: 0x200028463f40 with size: 0.000244 MiB 00:06:14.121 element at address: 0x200028464040 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20002846af80 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20002846b080 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20002846b180 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20002846b280 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20002846b380 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20002846b480 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20002846b580 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20002846b680 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20002846b780 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20002846b880 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20002846b980 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20002846be80 with size: 0.000244 MiB 00:06:14.121 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846c080 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846c180 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846c280 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846c380 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846c480 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846c580 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846c680 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846c780 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846c880 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846c980 with size: 0.000244 MiB 00:06:14.122 element at 
address: 0x20002846ca80 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846d080 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846d180 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846d280 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846d380 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846d480 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846d580 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846d680 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846d780 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846d880 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846d980 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846da80 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846db80 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846de80 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846df80 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846e080 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846e180 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846e280 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846e380 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846e480 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846e580 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846e680 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846e780 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846e880 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846e980 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846f080 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846f180 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846f280 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846f380 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846f480 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846f580 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846f680 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846f780 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846f880 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846f980 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846fb80 
with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:06:14.122 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:06:14.122 list of memzone associated elements. size: 602.264404 MiB 00:06:14.122 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:06:14.122 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:14.122 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:06:14.122 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:14.122 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:06:14.122 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_63250_0 00:06:14.122 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:06:14.122 associated memzone info: size: 48.002930 MiB name: MP_evtpool_63250_0 00:06:14.122 element at address: 0x200003fff340 with size: 48.003113 MiB 00:06:14.122 associated memzone info: size: 48.002930 MiB name: MP_msgpool_63250_0 00:06:14.122 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:06:14.122 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:14.122 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:06:14.122 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:14.122 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:06:14.122 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_63250 00:06:14.122 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:06:14.122 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_63250 00:06:14.122 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:14.122 associated memzone info: size: 1.007996 MiB name: MP_evtpool_63250 00:06:14.122 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:14.122 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:14.122 element at address: 0x200019abc780 with size: 1.008179 MiB 00:06:14.122 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:14.122 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:14.122 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:14.122 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:06:14.122 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:14.122 element at address: 0x200003eff100 with size: 1.000549 MiB 00:06:14.122 associated memzone info: size: 1.000366 MiB name: RG_ring_0_63250 00:06:14.122 element at address: 0x200003affb80 with size: 1.000549 MiB 00:06:14.122 associated memzone info: size: 1.000366 MiB name: RG_ring_1_63250 00:06:14.122 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:06:14.122 associated memzone info: size: 1.000366 MiB name: RG_ring_4_63250 00:06:14.122 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:06:14.122 associated memzone info: size: 1.000366 MiB name: RG_ring_5_63250 00:06:14.122 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:06:14.122 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_63250 00:06:14.122 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:06:14.122 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:14.122 element at address: 0x200013878680 with size: 0.500549 MiB 00:06:14.122 associated memzone info: size: 0.500366 MiB name: 
RG_MP_SCSI_TASK_Pool 00:06:14.122 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:06:14.122 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:14.122 element at address: 0x200003adf740 with size: 0.125549 MiB 00:06:14.122 associated memzone info: size: 0.125366 MiB name: RG_ring_2_63250 00:06:14.122 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:06:14.122 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:14.122 element at address: 0x200028464140 with size: 0.023804 MiB 00:06:14.122 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:14.122 element at address: 0x200003adb500 with size: 0.016174 MiB 00:06:14.122 associated memzone info: size: 0.015991 MiB name: RG_ring_3_63250 00:06:14.122 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:06:14.122 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:14.122 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:06:14.122 associated memzone info: size: 0.000183 MiB name: MP_msgpool_63250 00:06:14.122 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:06:14.122 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_63250 00:06:14.122 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:06:14.122 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:14.122 19:28:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:14.122 19:28:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 63250 00:06:14.122 19:28:04 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 63250 ']' 00:06:14.122 19:28:04 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 63250 00:06:14.122 19:28:04 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:14.122 19:28:04 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:14.122 19:28:04 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63250 00:06:14.122 killing process with pid 63250 00:06:14.122 19:28:04 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:14.122 19:28:04 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:14.122 19:28:04 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63250' 00:06:14.122 19:28:04 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 63250 00:06:14.122 19:28:04 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 63250 00:06:16.673 ************************************ 00:06:16.673 END TEST dpdk_mem_utility 00:06:16.673 ************************************ 00:06:16.673 00:06:16.673 real 0m4.618s 00:06:16.673 user 0m4.525s 00:06:16.673 sys 0m0.591s 00:06:16.673 19:28:07 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.673 19:28:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:16.673 19:28:07 -- common/autotest_common.sh@1142 -- # return 0 00:06:16.673 19:28:07 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:16.673 19:28:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.673 19:28:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.932 19:28:07 -- common/autotest_common.sh@10 -- # set +x 00:06:16.932 ************************************ 00:06:16.932 START TEST event 00:06:16.932 
************************************ 00:06:16.932 19:28:07 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:16.932 * Looking for test storage... 00:06:16.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:16.932 19:28:07 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:16.932 19:28:07 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:16.932 19:28:07 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:16.932 19:28:07 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:16.932 19:28:07 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.932 19:28:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.932 ************************************ 00:06:16.932 START TEST event_perf 00:06:16.932 ************************************ 00:06:16.932 19:28:07 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:16.932 Running I/O for 1 seconds...[2024-07-15 19:28:07.619505] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:06:16.932 [2024-07-15 19:28:07.619764] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63361 ] 00:06:17.190 [2024-07-15 19:28:07.786008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.447 [2024-07-15 19:28:08.035166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.447 [2024-07-15 19:28:08.035460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.447 [2024-07-15 19:28:08.035554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.447 [2024-07-15 19:28:08.035580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.820 Running I/O for 1 seconds... 00:06:18.820 lcore 0: 179652 00:06:18.820 lcore 1: 179652 00:06:18.820 lcore 2: 179654 00:06:18.820 lcore 3: 179652 00:06:18.820 done. 00:06:18.820 00:06:18.820 real 0m1.917s 00:06:18.820 ************************************ 00:06:18.820 END TEST event_perf 00:06:18.820 ************************************ 00:06:18.820 user 0m4.658s 00:06:18.820 sys 0m0.136s 00:06:18.820 19:28:09 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.820 19:28:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.820 19:28:09 event -- common/autotest_common.sh@1142 -- # return 0 00:06:18.820 19:28:09 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:18.820 19:28:09 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:18.820 19:28:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.820 19:28:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.820 ************************************ 00:06:18.820 START TEST event_reactor 00:06:18.820 ************************************ 00:06:18.820 19:28:09 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:18.820 [2024-07-15 19:28:09.595649] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:06:18.820 [2024-07-15 19:28:09.595802] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63401 ] 00:06:19.078 [2024-07-15 19:28:09.762310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.336 [2024-07-15 19:28:10.005868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.709 test_start 00:06:20.709 oneshot 00:06:20.709 tick 100 00:06:20.709 tick 100 00:06:20.709 tick 250 00:06:20.709 tick 100 00:06:20.709 tick 100 00:06:20.709 tick 250 00:06:20.709 tick 100 00:06:20.709 tick 500 00:06:20.709 tick 100 00:06:20.709 tick 100 00:06:20.709 tick 250 00:06:20.709 tick 100 00:06:20.709 tick 100 00:06:20.709 test_end 00:06:20.709 ************************************ 00:06:20.709 END TEST event_reactor 00:06:20.709 ************************************ 00:06:20.709 00:06:20.709 real 0m1.894s 00:06:20.709 user 0m1.670s 00:06:20.709 sys 0m0.114s 00:06:20.709 19:28:11 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.709 19:28:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:20.967 19:28:11 event -- common/autotest_common.sh@1142 -- # return 0 00:06:20.967 19:28:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:20.967 19:28:11 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:20.967 19:28:11 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.967 19:28:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.967 ************************************ 00:06:20.967 START TEST event_reactor_perf 00:06:20.967 ************************************ 00:06:20.967 19:28:11 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:20.967 [2024-07-15 19:28:11.561486] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:06:20.967 [2024-07-15 19:28:11.561666] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63443 ] 00:06:20.967 [2024-07-15 19:28:11.741148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.225 [2024-07-15 19:28:11.979376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.128 test_start 00:06:23.128 test_end 00:06:23.128 Performance: 343386 events per second 00:06:23.128 00:06:23.128 real 0m1.924s 00:06:23.128 user 0m1.691s 00:06:23.128 sys 0m0.122s 00:06:23.128 19:28:13 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.128 19:28:13 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:23.128 ************************************ 00:06:23.128 END TEST event_reactor_perf 00:06:23.128 ************************************ 00:06:23.128 19:28:13 event -- common/autotest_common.sh@1142 -- # return 0 00:06:23.128 19:28:13 event -- event/event.sh@49 -- # uname -s 00:06:23.128 19:28:13 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:23.128 19:28:13 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:23.128 19:28:13 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.128 19:28:13 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.128 19:28:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.128 ************************************ 00:06:23.128 START TEST event_scheduler 00:06:23.128 ************************************ 00:06:23.128 19:28:13 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:23.128 * Looking for test storage... 00:06:23.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:23.128 19:28:13 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:23.128 19:28:13 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=63513 00:06:23.128 19:28:13 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:23.128 19:28:13 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:23.128 19:28:13 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 63513 00:06:23.128 19:28:13 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 63513 ']' 00:06:23.128 19:28:13 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.128 19:28:13 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.128 19:28:13 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.128 19:28:13 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.128 19:28:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.128 [2024-07-15 19:28:13.721017] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:06:23.128 [2024-07-15 19:28:13.721219] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63513 ] 00:06:23.128 [2024-07-15 19:28:13.911503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.695 [2024-07-15 19:28:14.217218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.695 [2024-07-15 19:28:14.217412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.695 [2024-07-15 19:28:14.217454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.695 [2024-07-15 19:28:14.217634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.954 19:28:14 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.954 19:28:14 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:23.954 19:28:14 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:23.954 19:28:14 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.954 19:28:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.954 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:23.954 POWER: Cannot set governor of lcore 0 to userspace 00:06:23.954 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:23.954 POWER: Cannot set governor of lcore 0 to performance 00:06:23.954 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:23.954 POWER: Cannot set governor of lcore 0 to userspace 00:06:23.954 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:23.954 POWER: Cannot set governor of lcore 0 to userspace 00:06:23.954 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:23.954 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:23.954 POWER: Unable to set Power Management Environment for lcore 0 00:06:23.954 [2024-07-15 19:28:14.595836] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:23.954 [2024-07-15 19:28:14.595859] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:23.954 [2024-07-15 19:28:14.595876] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:23.954 [2024-07-15 19:28:14.595898] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:23.954 [2024-07-15 19:28:14.595913] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:23.954 [2024-07-15 19:28:14.595924] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:23.954 19:28:14 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.954 19:28:14 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:23.954 19:28:14 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.954 19:28:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.211 [2024-07-15 19:28:14.977848] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
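The governor failures and scheduler_dynamic notices above come from the dynamic-scheduler setup that scheduler.sh performs before its subtests run; a minimal sketch of that setup, assuming the scheduler app was started with --wait-for-rpc and the rpc_cmd helper from autotest_common.sh is pointed at it, is:

    # switch to the dynamic scheduler, then let the framework finish initialization
    rpc_cmd framework_set_scheduler dynamic
    rpc_cmd framework_start_init

On hosts without writable cpufreq governors, as in the trace above, the DPDK governor fails to initialize and the dynamic scheduler continues with its default limits (load 20, core 80, busy 95).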
00:06:24.211 19:28:14 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.211 19:28:14 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:24.211 19:28:14 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.211 19:28:14 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.211 19:28:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.211 ************************************ 00:06:24.211 START TEST scheduler_create_thread 00:06:24.211 ************************************ 00:06:24.211 19:28:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:24.211 19:28:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:24.211 19:28:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.211 19:28:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.471 2 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.471 3 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.471 4 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.471 5 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.471 6 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.471 7 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.471 8 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.471 9 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.471 10 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.471 19:28:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.404 19:28:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.404 19:28:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:25.404 19:28:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:25.404 19:28:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.404 19:28:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.780 ************************************ 00:06:26.780 END TEST scheduler_create_thread 00:06:26.780 ************************************ 00:06:26.780 19:28:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.780 00:06:26.780 real 0m2.141s 00:06:26.780 user 0m0.019s 00:06:26.780 sys 0m0.008s 00:06:26.780 19:28:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.780 19:28:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.780 19:28:17 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:26.780 19:28:17 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:26.780 19:28:17 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 63513 00:06:26.780 19:28:17 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 63513 ']' 00:06:26.780 19:28:17 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 63513 00:06:26.780 19:28:17 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:26.780 19:28:17 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:26.780 19:28:17 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63513 00:06:26.780 killing process with pid 63513 00:06:26.780 19:28:17 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:26.780 19:28:17 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:26.780 19:28:17 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63513' 00:06:26.780 19:28:17 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 63513 00:06:26.780 19:28:17 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 63513 00:06:27.038 [2024-07-15 19:28:17.612972] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
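The scheduler_create_thread trace above reduces to a short sequence of scheduler_plugin RPCs; a minimal sketch, assuming rpc_cmd targets the running scheduler app and that scheduler_thread_create prints the id of the new thread (the ids 11 and 12 seen in the trace), is:

    # pinned active and idle threads (the full run repeats these for masks 0x2, 0x4, 0x8)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # unpinned thread that is active 30% of the time
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    # create a thread, then change its activity at runtime
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    # create and immediately delete a thread
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"

Only the 0x1 variants are shown here; the trace above issues the same create calls for each of the four cores in the 0xF mask.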
00:06:28.515 00:06:28.515 real 0m5.596s 00:06:28.515 user 0m8.647s 00:06:28.515 sys 0m0.539s 00:06:28.515 ************************************ 00:06:28.515 END TEST event_scheduler 00:06:28.515 ************************************ 00:06:28.515 19:28:19 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.515 19:28:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.516 19:28:19 event -- common/autotest_common.sh@1142 -- # return 0 00:06:28.516 19:28:19 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:28.516 19:28:19 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:28.516 19:28:19 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.516 19:28:19 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.516 19:28:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.516 ************************************ 00:06:28.516 START TEST app_repeat 00:06:28.516 ************************************ 00:06:28.516 19:28:19 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:28.516 19:28:19 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.516 19:28:19 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.516 19:28:19 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:28.516 19:28:19 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.516 19:28:19 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:28.516 19:28:19 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:28.516 19:28:19 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:28.516 Process app_repeat pid: 63624 00:06:28.516 spdk_app_start Round 0 00:06:28.516 19:28:19 event.app_repeat -- event/event.sh@19 -- # repeat_pid=63624 00:06:28.516 19:28:19 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:28.516 19:28:19 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:28.516 19:28:19 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63624' 00:06:28.516 19:28:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:28.516 19:28:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:28.516 19:28:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63624 /var/tmp/spdk-nbd.sock 00:06:28.516 19:28:19 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63624 ']' 00:06:28.516 19:28:19 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:28.516 19:28:19 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.516 19:28:19 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:28.516 19:28:19 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.516 19:28:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.516 [2024-07-15 19:28:19.239603] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:06:28.516 [2024-07-15 19:28:19.240628] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63624 ] 00:06:28.775 [2024-07-15 19:28:19.426503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.033 [2024-07-15 19:28:19.668982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.033 [2024-07-15 19:28:19.669014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.599 19:28:20 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.599 19:28:20 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:29.599 19:28:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.857 Malloc0 00:06:29.857 19:28:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.115 Malloc1 00:06:30.115 19:28:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.115 19:28:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.115 19:28:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.115 19:28:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:30.115 19:28:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.115 19:28:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:30.115 19:28:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.115 19:28:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.115 19:28:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.115 19:28:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:30.115 19:28:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.115 19:28:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:30.115 19:28:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:30.115 19:28:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:30.115 19:28:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.115 19:28:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:30.374 /dev/nbd0 00:06:30.374 19:28:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:30.374 19:28:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:30.374 19:28:21 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:30.374 19:28:21 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:30.374 19:28:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:30.374 19:28:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:30.374 19:28:21 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:30.374 19:28:21 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:06:30.374 19:28:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:30.374 19:28:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:30.374 19:28:21 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.374 1+0 records in 00:06:30.374 1+0 records out 00:06:30.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331672 s, 12.3 MB/s 00:06:30.374 19:28:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.374 19:28:21 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:30.374 19:28:21 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.374 19:28:21 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:30.374 19:28:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:30.374 19:28:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.374 19:28:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.374 19:28:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:30.632 /dev/nbd1 00:06:30.632 19:28:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:30.632 19:28:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:30.632 19:28:21 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:30.632 19:28:21 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:30.632 19:28:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:30.632 19:28:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:30.632 19:28:21 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:30.632 19:28:21 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:30.632 19:28:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:30.632 19:28:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:30.632 19:28:21 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.632 1+0 records in 00:06:30.632 1+0 records out 00:06:30.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549624 s, 7.5 MB/s 00:06:30.632 19:28:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.632 19:28:21 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:30.632 19:28:21 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.632 19:28:21 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:30.632 19:28:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:30.632 19:28:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.632 19:28:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.632 19:28:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.632 19:28:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.632 
19:28:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.890 19:28:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:30.890 { 00:06:30.890 "nbd_device": "/dev/nbd0", 00:06:30.890 "bdev_name": "Malloc0" 00:06:30.890 }, 00:06:30.890 { 00:06:30.890 "nbd_device": "/dev/nbd1", 00:06:30.890 "bdev_name": "Malloc1" 00:06:30.890 } 00:06:30.890 ]' 00:06:30.890 19:28:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:30.890 { 00:06:30.890 "nbd_device": "/dev/nbd0", 00:06:30.890 "bdev_name": "Malloc0" 00:06:30.890 }, 00:06:30.890 { 00:06:30.890 "nbd_device": "/dev/nbd1", 00:06:30.890 "bdev_name": "Malloc1" 00:06:30.890 } 00:06:30.890 ]' 00:06:30.890 19:28:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.890 19:28:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:30.890 /dev/nbd1' 00:06:30.890 19:28:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.890 19:28:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:30.890 /dev/nbd1' 00:06:30.890 19:28:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:30.890 19:28:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:30.890 19:28:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:30.890 19:28:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:30.890 19:28:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:30.890 19:28:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.890 19:28:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.890 19:28:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:30.890 19:28:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.890 19:28:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:30.890 19:28:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:30.890 256+0 records in 00:06:30.890 256+0 records out 00:06:30.890 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104693 s, 100 MB/s 00:06:30.890 19:28:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.890 19:28:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:31.148 256+0 records in 00:06:31.148 256+0 records out 00:06:31.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299569 s, 35.0 MB/s 00:06:31.148 19:28:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.148 19:28:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:31.148 256+0 records in 00:06:31.148 256+0 records out 00:06:31.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0385546 s, 27.2 MB/s 00:06:31.148 19:28:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:31.148 19:28:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.148 19:28:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.148 19:28:21 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:31.148 19:28:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:31.148 19:28:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:31.148 19:28:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:31.148 19:28:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.148 19:28:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:31.148 19:28:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.148 19:28:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:31.148 19:28:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:31.148 19:28:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:31.148 19:28:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.148 19:28:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.148 19:28:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:31.148 19:28:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:31.148 19:28:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.148 19:28:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:31.405 19:28:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:31.405 19:28:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:31.405 19:28:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:31.405 19:28:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.405 19:28:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.405 19:28:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:31.405 19:28:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.405 19:28:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.405 19:28:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.405 19:28:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:31.740 19:28:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:31.740 19:28:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:31.740 19:28:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:31.740 19:28:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.740 19:28:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.740 19:28:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:31.740 19:28:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.740 19:28:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.740 19:28:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.740 19:28:22 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.740 19:28:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.006 19:28:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:32.006 19:28:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:32.006 19:28:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.006 19:28:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:32.006 19:28:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:32.006 19:28:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.006 19:28:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:32.006 19:28:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:32.006 19:28:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:32.006 19:28:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:32.006 19:28:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:32.006 19:28:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:32.006 19:28:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:32.577 19:28:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:34.479 [2024-07-15 19:28:24.748096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.479 [2024-07-15 19:28:25.018152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.479 [2024-07-15 19:28:25.018156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.738 [2024-07-15 19:28:25.289170] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:34.738 [2024-07-15 19:28:25.289273] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:35.771 spdk_app_start Round 1 00:06:35.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:35.771 19:28:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:35.771 19:28:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:35.771 19:28:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63624 /var/tmp/spdk-nbd.sock 00:06:35.771 19:28:26 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63624 ']' 00:06:35.771 19:28:26 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.771 19:28:26 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.771 19:28:26 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
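Round 0 above exercises the malloc-bdev-over-NBD data path end to end; a minimal sketch of the same flow, assuming app_repeat is listening on /var/tmp/spdk-nbd.sock and /dev/nbd0 is free (paths are taken from the trace; the rpc and sock shell variables are just shorthand here), is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    $rpc -s $sock bdev_malloc_create 64 4096                      # creates Malloc0
    $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
    # write a random 1 MiB pattern through the NBD device and verify it
    dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
    dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
    $rpc -s $sock nbd_stop_disk /dev/nbd0
    $rpc -s $sock spdk_kill_instance SIGTERM                      # ends the round

The same steps run against Malloc1 and /dev/nbd1, and nbd_get_disks is queried before and after to confirm how many NBD devices remain exported.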
00:06:35.771 19:28:26 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.771 19:28:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.771 19:28:26 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.771 19:28:26 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:35.771 19:28:26 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.030 Malloc0 00:06:36.030 19:28:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.597 Malloc1 00:06:36.597 19:28:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.597 19:28:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.597 19:28:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.597 19:28:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:36.597 19:28:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.597 19:28:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:36.597 19:28:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.597 19:28:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.597 19:28:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.597 19:28:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.597 19:28:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.597 19:28:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.597 19:28:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:36.597 19:28:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.597 19:28:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.597 19:28:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:36.856 /dev/nbd0 00:06:36.856 19:28:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:36.856 19:28:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:36.856 19:28:27 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:36.856 19:28:27 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:36.856 19:28:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:36.856 19:28:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:36.856 19:28:27 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:36.856 19:28:27 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:36.856 19:28:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:36.856 19:28:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:36.856 19:28:27 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.856 1+0 records in 00:06:36.856 1+0 records out 
00:06:36.856 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587709 s, 7.0 MB/s 00:06:36.856 19:28:27 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.856 19:28:27 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:36.856 19:28:27 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.856 19:28:27 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:36.856 19:28:27 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:36.856 19:28:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.856 19:28:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.856 19:28:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:37.115 /dev/nbd1 00:06:37.115 19:28:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:37.115 19:28:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:37.115 19:28:27 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:37.115 19:28:27 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:37.115 19:28:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:37.115 19:28:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:37.115 19:28:27 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:37.115 19:28:27 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:37.115 19:28:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:37.115 19:28:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:37.115 19:28:27 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.115 1+0 records in 00:06:37.115 1+0 records out 00:06:37.115 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445098 s, 9.2 MB/s 00:06:37.115 19:28:27 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.115 19:28:27 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:37.115 19:28:27 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.115 19:28:27 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:37.115 19:28:27 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:37.115 19:28:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.115 19:28:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.115 19:28:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.115 19:28:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.115 19:28:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.374 19:28:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:37.374 { 00:06:37.374 "nbd_device": "/dev/nbd0", 00:06:37.374 "bdev_name": "Malloc0" 00:06:37.374 }, 00:06:37.374 { 00:06:37.374 "nbd_device": "/dev/nbd1", 00:06:37.374 "bdev_name": "Malloc1" 00:06:37.374 } 
00:06:37.374 ]' 00:06:37.374 19:28:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:37.374 { 00:06:37.374 "nbd_device": "/dev/nbd0", 00:06:37.374 "bdev_name": "Malloc0" 00:06:37.374 }, 00:06:37.374 { 00:06:37.374 "nbd_device": "/dev/nbd1", 00:06:37.374 "bdev_name": "Malloc1" 00:06:37.374 } 00:06:37.374 ]' 00:06:37.374 19:28:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.374 19:28:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:37.374 /dev/nbd1' 00:06:37.374 19:28:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.374 19:28:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:37.374 /dev/nbd1' 00:06:37.374 19:28:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:37.374 19:28:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:37.374 19:28:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:37.374 19:28:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:37.374 19:28:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:37.374 19:28:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.374 19:28:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.374 19:28:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:37.374 19:28:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:37.374 19:28:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:37.374 19:28:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:37.374 256+0 records in 00:06:37.374 256+0 records out 00:06:37.374 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00669207 s, 157 MB/s 00:06:37.375 19:28:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.375 19:28:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:37.632 256+0 records in 00:06:37.632 256+0 records out 00:06:37.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0344087 s, 30.5 MB/s 00:06:37.632 19:28:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.632 19:28:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:37.632 256+0 records in 00:06:37.632 256+0 records out 00:06:37.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0360396 s, 29.1 MB/s 00:06:37.632 19:28:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:37.632 19:28:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.632 19:28:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.632 19:28:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:37.632 19:28:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:37.632 19:28:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:37.632 19:28:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:37.632 19:28:28 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.632 19:28:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:37.632 19:28:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.632 19:28:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:37.632 19:28:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:37.632 19:28:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:37.632 19:28:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.632 19:28:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.632 19:28:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.632 19:28:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:37.632 19:28:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.632 19:28:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:37.891 19:28:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.891 19:28:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.891 19:28:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.891 19:28:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.891 19:28:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.891 19:28:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.891 19:28:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.891 19:28:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.891 19:28:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.891 19:28:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:38.149 19:28:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:38.149 19:28:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:38.149 19:28:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:38.149 19:28:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.149 19:28:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.149 19:28:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:38.149 19:28:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.150 19:28:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.150 19:28:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.150 19:28:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.150 19:28:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.409 19:28:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:38.409 19:28:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:38.409 19:28:29 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:38.409 19:28:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:38.409 19:28:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:38.409 19:28:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.409 19:28:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:38.409 19:28:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:38.409 19:28:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:38.409 19:28:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:38.409 19:28:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:38.409 19:28:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:38.409 19:28:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:38.975 19:28:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:40.874 [2024-07-15 19:28:31.301078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.874 [2024-07-15 19:28:31.567001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.874 [2024-07-15 19:28:31.567004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.135 [2024-07-15 19:28:31.833198] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:41.135 [2024-07-15 19:28:31.833301] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:42.072 spdk_app_start Round 2 00:06:42.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:42.072 19:28:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:42.072 19:28:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:42.072 19:28:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63624 /var/tmp/spdk-nbd.sock 00:06:42.072 19:28:32 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63624 ']' 00:06:42.072 19:28:32 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:42.072 19:28:32 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.072 19:28:32 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:42.072 19:28:32 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.072 19:28:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:42.329 19:28:32 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.329 19:28:32 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:42.329 19:28:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:42.585 Malloc0 00:06:42.585 19:28:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:42.843 Malloc1 00:06:43.101 19:28:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.101 19:28:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.101 19:28:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.101 19:28:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:43.101 19:28:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.101 19:28:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:43.101 19:28:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.101 19:28:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.101 19:28:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.101 19:28:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:43.101 19:28:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.101 19:28:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:43.101 19:28:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:43.101 19:28:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:43.101 19:28:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.101 19:28:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:43.101 /dev/nbd0 00:06:43.101 19:28:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:43.359 19:28:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:43.359 19:28:33 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:43.359 19:28:33 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:43.359 19:28:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:43.359 19:28:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:43.359 19:28:33 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:43.359 19:28:33 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:43.359 19:28:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:43.359 19:28:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:43.359 19:28:33 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:43.359 1+0 records in 00:06:43.359 1+0 records out 
00:06:43.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00150052 s, 2.7 MB/s 00:06:43.359 19:28:33 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.359 19:28:33 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:43.359 19:28:33 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.359 19:28:33 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:43.359 19:28:33 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:43.359 19:28:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.359 19:28:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.359 19:28:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:43.616 /dev/nbd1 00:06:43.616 19:28:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:43.616 19:28:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:43.616 19:28:34 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:43.616 19:28:34 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:43.616 19:28:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:43.616 19:28:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:43.616 19:28:34 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:43.616 19:28:34 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:43.616 19:28:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:43.616 19:28:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:43.616 19:28:34 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:43.616 1+0 records in 00:06:43.616 1+0 records out 00:06:43.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485703 s, 8.4 MB/s 00:06:43.616 19:28:34 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.616 19:28:34 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:43.616 19:28:34 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.616 19:28:34 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:43.616 19:28:34 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:43.616 19:28:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.616 19:28:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.616 19:28:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:43.616 19:28:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.616 19:28:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.874 19:28:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:43.874 { 00:06:43.874 "nbd_device": "/dev/nbd0", 00:06:43.874 "bdev_name": "Malloc0" 00:06:43.874 }, 00:06:43.874 { 00:06:43.874 "nbd_device": "/dev/nbd1", 00:06:43.874 "bdev_name": "Malloc1" 00:06:43.874 } 00:06:43.874 
]' 00:06:43.874 19:28:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:43.874 { 00:06:43.874 "nbd_device": "/dev/nbd0", 00:06:43.874 "bdev_name": "Malloc0" 00:06:43.874 }, 00:06:43.874 { 00:06:43.874 "nbd_device": "/dev/nbd1", 00:06:43.874 "bdev_name": "Malloc1" 00:06:43.874 } 00:06:43.874 ]' 00:06:43.874 19:28:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:43.874 19:28:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:43.874 /dev/nbd1' 00:06:43.874 19:28:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:43.874 /dev/nbd1' 00:06:43.874 19:28:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.874 19:28:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:43.874 19:28:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:43.874 19:28:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:43.874 19:28:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:43.874 19:28:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:43.874 19:28:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.874 19:28:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:43.874 19:28:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:43.874 19:28:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:43.874 19:28:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:43.874 19:28:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:43.874 256+0 records in 00:06:43.874 256+0 records out 00:06:43.874 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00655472 s, 160 MB/s 00:06:43.874 19:28:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.874 19:28:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:43.874 256+0 records in 00:06:43.874 256+0 records out 00:06:43.874 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312701 s, 33.5 MB/s 00:06:43.874 19:28:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.874 19:28:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:44.131 256+0 records in 00:06:44.131 256+0 records out 00:06:44.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297961 s, 35.2 MB/s 00:06:44.131 19:28:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:44.131 19:28:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.131 19:28:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.131 19:28:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:44.131 19:28:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:44.131 19:28:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:44.131 19:28:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:44.131 19:28:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:06:44.131 19:28:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:44.131 19:28:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.131 19:28:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:44.131 19:28:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:44.131 19:28:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:44.131 19:28:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.131 19:28:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.131 19:28:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:44.131 19:28:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:44.131 19:28:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.131 19:28:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:44.389 19:28:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:44.389 19:28:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:44.389 19:28:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:44.389 19:28:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.389 19:28:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.389 19:28:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:44.389 19:28:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:44.389 19:28:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.389 19:28:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.389 19:28:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:44.648 19:28:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:44.648 19:28:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:44.648 19:28:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:44.648 19:28:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.648 19:28:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.648 19:28:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:44.648 19:28:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:44.648 19:28:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.648 19:28:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.648 19:28:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.648 19:28:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.907 19:28:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:44.907 19:28:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:44.907 19:28:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:06:44.907 19:28:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:44.907 19:28:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:44.907 19:28:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.907 19:28:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:44.907 19:28:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:44.907 19:28:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:44.907 19:28:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:44.907 19:28:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:44.907 19:28:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:44.907 19:28:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:45.473 19:28:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:47.366 [2024-07-15 19:28:37.734976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.366 [2024-07-15 19:28:37.997079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.366 [2024-07-15 19:28:37.997084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.623 [2024-07-15 19:28:38.269842] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:47.623 [2024-07-15 19:28:38.269969] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:48.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:48.556 19:28:39 event.app_repeat -- event/event.sh@38 -- # waitforlisten 63624 /var/tmp/spdk-nbd.sock 00:06:48.556 19:28:39 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63624 ']' 00:06:48.556 19:28:39 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:48.556 19:28:39 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.556 19:28:39 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:48.556 19:28:39 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.556 19:28:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:48.814 19:28:39 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.814 19:28:39 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:48.814 19:28:39 event.app_repeat -- event/event.sh@39 -- # killprocess 63624 00:06:48.814 19:28:39 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 63624 ']' 00:06:48.814 19:28:39 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 63624 00:06:48.814 19:28:39 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:48.814 19:28:39 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.814 19:28:39 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63624 00:06:48.814 killing process with pid 63624 00:06:48.814 19:28:39 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:48.814 19:28:39 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:48.814 19:28:39 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63624' 00:06:48.814 19:28:39 event.app_repeat -- common/autotest_common.sh@967 -- # kill 63624 00:06:48.814 19:28:39 event.app_repeat -- common/autotest_common.sh@972 -- # wait 63624 00:06:50.213 spdk_app_start is called in Round 0. 00:06:50.213 Shutdown signal received, stop current app iteration 00:06:50.213 Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 reinitialization... 00:06:50.213 spdk_app_start is called in Round 1. 00:06:50.213 Shutdown signal received, stop current app iteration 00:06:50.213 Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 reinitialization... 00:06:50.213 spdk_app_start is called in Round 2. 00:06:50.213 Shutdown signal received, stop current app iteration 00:06:50.213 Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 reinitialization... 00:06:50.213 spdk_app_start is called in Round 3. 
00:06:50.213 Shutdown signal received, stop current app iteration 00:06:50.213 19:28:40 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:50.213 19:28:40 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:50.213 00:06:50.213 real 0m21.744s 00:06:50.213 user 0m45.392s 00:06:50.213 sys 0m3.496s 00:06:50.213 19:28:40 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.213 19:28:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:50.213 ************************************ 00:06:50.213 END TEST app_repeat 00:06:50.213 ************************************ 00:06:50.213 19:28:40 event -- common/autotest_common.sh@1142 -- # return 0 00:06:50.213 19:28:40 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:50.213 19:28:40 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:50.213 19:28:40 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.213 19:28:40 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.213 19:28:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.213 ************************************ 00:06:50.213 START TEST cpu_locks 00:06:50.213 ************************************ 00:06:50.213 19:28:40 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:50.472 * Looking for test storage... 00:06:50.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:50.472 19:28:41 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:50.472 19:28:41 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:50.472 19:28:41 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:50.472 19:28:41 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:50.472 19:28:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.472 19:28:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.472 19:28:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.472 ************************************ 00:06:50.472 START TEST default_locks 00:06:50.472 ************************************ 00:06:50.472 19:28:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:50.472 19:28:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=64088 00:06:50.472 19:28:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:50.472 19:28:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 64088 00:06:50.472 19:28:41 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 64088 ']' 00:06:50.472 19:28:41 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.472 19:28:41 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.472 19:28:41 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:50.472 19:28:41 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.472 19:28:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.472 [2024-07-15 19:28:41.215333] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:06:50.472 [2024-07-15 19:28:41.216220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64088 ] 00:06:50.730 [2024-07-15 19:28:41.402598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.988 [2024-07-15 19:28:41.743681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.363 19:28:42 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.363 19:28:42 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:52.363 19:28:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 64088 00:06:52.363 19:28:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.363 19:28:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 64088 00:06:52.620 19:28:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 64088 00:06:52.620 19:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 64088 ']' 00:06:52.620 19:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 64088 00:06:52.620 19:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:52.620 19:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:52.620 19:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64088 00:06:52.877 19:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:52.877 19:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:52.877 killing process with pid 64088 00:06:52.877 19:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64088' 00:06:52.877 19:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 64088 00:06:52.877 19:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 64088 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 64088 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64088 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 64088 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 64088 ']' 00:06:56.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.164 ERROR: process (pid: 64088) is no longer running 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.164 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64088) - No such process 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:56.164 ************************************ 00:06:56.164 END TEST default_locks 00:06:56.164 ************************************ 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:56.164 00:06:56.164 real 0m5.195s 00:06:56.164 user 0m5.217s 00:06:56.164 sys 0m0.812s 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.164 19:28:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.164 19:28:46 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:56.164 19:28:46 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:56.164 19:28:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.164 19:28:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.164 19:28:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.164 ************************************ 00:06:56.164 START TEST default_locks_via_rpc 00:06:56.164 ************************************ 00:06:56.164 19:28:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:56.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:56.164 19:28:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=64174 00:06:56.164 19:28:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 64174 00:06:56.164 19:28:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:56.164 19:28:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64174 ']' 00:06:56.164 19:28:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.164 19:28:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.164 19:28:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.164 19:28:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.164 19:28:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.164 [2024-07-15 19:28:46.487318] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:06:56.164 [2024-07-15 19:28:46.487521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64174 ] 00:06:56.164 [2024-07-15 19:28:46.678202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.164 [2024-07-15 19:28:46.924119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.101 19:28:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.101 19:28:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:57.101 19:28:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:57.101 19:28:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.101 19:28:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.360 19:28:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.360 19:28:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:57.360 19:28:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:57.360 19:28:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:57.360 19:28:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:57.360 19:28:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:57.360 19:28:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.360 19:28:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.360 19:28:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.360 19:28:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 64174 00:06:57.360 19:28:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 64174 00:06:57.360 
19:28:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.619 19:28:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 64174 00:06:57.619 19:28:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 64174 ']' 00:06:57.619 19:28:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 64174 00:06:57.877 19:28:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:57.877 19:28:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.877 19:28:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64174 00:06:57.877 killing process with pid 64174 00:06:57.877 19:28:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.877 19:28:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.877 19:28:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64174' 00:06:57.877 19:28:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 64174 00:06:57.877 19:28:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 64174 00:07:00.431 ************************************ 00:07:00.431 END TEST default_locks_via_rpc 00:07:00.431 ************************************ 00:07:00.431 00:07:00.431 real 0m4.740s 00:07:00.431 user 0m4.838s 00:07:00.431 sys 0m0.776s 00:07:00.431 19:28:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.431 19:28:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.431 19:28:51 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:00.431 19:28:51 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:00.431 19:28:51 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.431 19:28:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.431 19:28:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.431 ************************************ 00:07:00.431 START TEST non_locking_app_on_locked_coremask 00:07:00.431 ************************************ 00:07:00.431 19:28:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:07:00.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:00.431 19:28:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=64259 00:07:00.431 19:28:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 64259 /var/tmp/spdk.sock 00:07:00.431 19:28:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64259 ']' 00:07:00.431 19:28:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.431 19:28:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:00.431 19:28:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.431 19:28:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.431 19:28:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.431 19:28:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.699 [2024-07-15 19:28:51.246570] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:07:00.699 [2024-07-15 19:28:51.246709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64259 ] 00:07:00.699 [2024-07-15 19:28:51.412527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.958 [2024-07-15 19:28:51.653156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:01.895 19:28:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.895 19:28:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:01.895 19:28:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=64275 00:07:01.895 19:28:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 64275 /var/tmp/spdk2.sock 00:07:01.895 19:28:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64275 ']' 00:07:01.895 19:28:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.895 19:28:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.895 19:28:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:01.895 19:28:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.895 19:28:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.895 19:28:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:02.154 [2024-07-15 19:28:52.783177] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:07:02.154 [2024-07-15 19:28:52.783356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64275 ] 00:07:02.412 [2024-07-15 19:28:52.958638] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:02.412 [2024-07-15 19:28:52.958706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.671 [2024-07-15 19:28:53.451292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.595 19:28:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.595 19:28:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:04.595 19:28:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 64259 00:07:04.595 19:28:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:04.595 19:28:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64259 00:07:05.980 19:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 64259 00:07:05.980 19:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64259 ']' 00:07:05.980 19:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64259 00:07:05.980 19:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:05.980 19:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:05.980 19:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64259 00:07:05.980 killing process with pid 64259 00:07:05.980 19:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:05.980 19:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:05.980 19:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64259' 00:07:05.980 19:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64259 00:07:05.980 19:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64259 00:07:11.263 19:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 64275 00:07:11.263 19:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64275 ']' 00:07:11.263 19:29:01 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64275 00:07:11.263 19:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:11.263 19:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.263 19:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64275 00:07:11.263 killing process with pid 64275 00:07:11.263 19:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:11.263 19:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.263 19:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64275' 00:07:11.263 19:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64275 00:07:11.263 19:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64275 00:07:13.801 00:07:13.801 real 0m13.232s 00:07:13.801 user 0m13.719s 00:07:13.801 sys 0m1.588s 00:07:13.801 19:29:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.801 19:29:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.801 ************************************ 00:07:13.801 END TEST non_locking_app_on_locked_coremask 00:07:13.801 ************************************ 00:07:13.801 19:29:04 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:13.801 19:29:04 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:13.801 19:29:04 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.801 19:29:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.801 19:29:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.801 ************************************ 00:07:13.801 START TEST locking_app_on_unlocked_coremask 00:07:13.801 ************************************ 00:07:13.801 19:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:13.801 19:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=64440 00:07:13.801 19:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 64440 /var/tmp/spdk.sock 00:07:13.801 19:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64440 ']' 00:07:13.801 19:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:13.801 19:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.801 19:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:13.801 19:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.802 19:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.802 19:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.802 [2024-07-15 19:29:04.578040] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:07:13.802 [2024-07-15 19:29:04.578510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64440 ] 00:07:14.060 [2024-07-15 19:29:04.774404] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:14.060 [2024-07-15 19:29:04.774503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.329 [2024-07-15 19:29:05.045372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.708 19:29:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.708 19:29:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:15.708 19:29:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=64461 00:07:15.708 19:29:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 64461 /var/tmp/spdk2.sock 00:07:15.708 19:29:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:15.708 19:29:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64461 ']' 00:07:15.708 19:29:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.708 19:29:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.708 19:29:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.708 19:29:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.708 19:29:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.708 [2024-07-15 19:29:06.201428] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:07:15.708 [2024-07-15 19:29:06.201930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64461 ] 00:07:15.708 [2024-07-15 19:29:06.391630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.274 [2024-07-15 19:29:06.897924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.219 19:29:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.219 19:29:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:18.219 19:29:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 64461 00:07:18.219 19:29:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64461 00:07:18.219 19:29:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:19.152 19:29:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 64440 00:07:19.152 19:29:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64440 ']' 00:07:19.152 19:29:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64440 00:07:19.152 19:29:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:19.152 19:29:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:19.152 19:29:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64440 00:07:19.152 19:29:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:19.152 19:29:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:19.152 killing process with pid 64440 00:07:19.152 19:29:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64440' 00:07:19.152 19:29:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64440 00:07:19.152 19:29:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64440 00:07:25.754 19:29:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 64461 00:07:25.754 19:29:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64461 ']' 00:07:25.754 19:29:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64461 00:07:25.754 19:29:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:25.754 19:29:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:25.754 19:29:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64461 00:07:25.754 killing process with pid 64461 00:07:25.754 19:29:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:25.754 19:29:15 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:25.754 19:29:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64461' 00:07:25.754 19:29:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64461 00:07:25.754 19:29:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64461 00:07:28.286 ************************************ 00:07:28.286 END TEST locking_app_on_unlocked_coremask 00:07:28.286 ************************************ 00:07:28.286 00:07:28.286 real 0m14.018s 00:07:28.286 user 0m14.430s 00:07:28.286 sys 0m1.596s 00:07:28.286 19:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.286 19:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.286 19:29:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:28.286 19:29:18 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:28.286 19:29:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.286 19:29:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.286 19:29:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.286 ************************************ 00:07:28.286 START TEST locking_app_on_locked_coremask 00:07:28.286 ************************************ 00:07:28.286 19:29:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:28.286 19:29:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64630 00:07:28.286 19:29:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 64630 /var/tmp/spdk.sock 00:07:28.286 19:29:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:28.286 19:29:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64630 ']' 00:07:28.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.286 19:29:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.286 19:29:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.286 19:29:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.286 19:29:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.286 19:29:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.286 [2024-07-15 19:29:18.615391] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:07:28.286 [2024-07-15 19:29:18.616398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64630 ] 00:07:28.286 [2024-07-15 19:29:18.781516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.286 [2024-07-15 19:29:19.064206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.689 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.689 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:29.689 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64653 00:07:29.689 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:29.689 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64653 /var/tmp/spdk2.sock 00:07:29.689 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:29.689 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64653 /var/tmp/spdk2.sock 00:07:29.689 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:29.689 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.689 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:29.689 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.689 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64653 /var/tmp/spdk2.sock 00:07:29.689 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64653 ']' 00:07:29.689 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:29.689 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.689 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:29.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:29.689 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.689 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.689 [2024-07-15 19:29:20.236047] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:07:29.689 [2024-07-15 19:29:20.236227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64653 ] 00:07:29.689 [2024-07-15 19:29:20.426328] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64630 has claimed it. 00:07:29.689 [2024-07-15 19:29:20.426610] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:30.256 ERROR: process (pid: 64653) is no longer running 00:07:30.256 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64653) - No such process 00:07:30.256 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:30.256 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:30.256 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:30.257 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:30.257 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:30.257 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:30.257 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 64630 00:07:30.257 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64630 00:07:30.257 19:29:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:30.823 19:29:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 64630 00:07:30.823 19:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64630 ']' 00:07:30.823 19:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64630 00:07:30.823 19:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:30.823 19:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:30.823 19:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64630 00:07:30.823 19:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:30.823 19:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:30.823 killing process with pid 64630 00:07:30.823 19:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64630' 00:07:30.823 19:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64630 00:07:30.823 19:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64630 00:07:33.424 ************************************ 00:07:33.424 END TEST locking_app_on_locked_coremask 00:07:33.424 ************************************ 00:07:33.424 00:07:33.424 real 0m5.615s 00:07:33.424 user 0m5.974s 00:07:33.424 sys 0m1.007s 00:07:33.424 19:29:24 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.424 19:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.424 19:29:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:33.424 19:29:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:33.424 19:29:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:33.424 19:29:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.424 19:29:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:33.424 ************************************ 00:07:33.424 START TEST locking_overlapped_coremask 00:07:33.424 ************************************ 00:07:33.424 19:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:33.424 19:29:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64728 00:07:33.424 19:29:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:33.424 19:29:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 64728 /var/tmp/spdk.sock 00:07:33.424 19:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64728 ']' 00:07:33.424 19:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.424 19:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:33.424 19:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.424 19:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:33.424 19:29:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.682 [2024-07-15 19:29:24.322403] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:07:33.682 [2024-07-15 19:29:24.322941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64728 ] 00:07:33.940 [2024-07-15 19:29:24.517370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:34.198 [2024-07-15 19:29:24.851513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.198 [2024-07-15 19:29:24.851617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.198 [2024-07-15 19:29:24.851628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.131 19:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.131 19:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:35.131 19:29:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:35.131 19:29:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64746 00:07:35.131 19:29:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64746 /var/tmp/spdk2.sock 00:07:35.131 19:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:35.131 19:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64746 /var/tmp/spdk2.sock 00:07:35.131 19:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:35.131 19:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.131 19:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:35.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:35.131 19:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.131 19:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64746 /var/tmp/spdk2.sock 00:07:35.131 19:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64746 ']' 00:07:35.131 19:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:35.131 19:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.131 19:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:35.131 19:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.131 19:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.390 [2024-07-15 19:29:25.953329] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:07:35.390 [2024-07-15 19:29:25.953483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64746 ] 00:07:35.390 [2024-07-15 19:29:26.134573] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64728 has claimed it. 00:07:35.390 [2024-07-15 19:29:26.138839] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:35.957 ERROR: process (pid: 64746) is no longer running 00:07:35.957 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64746) - No such process 00:07:35.957 19:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.957 19:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:35.957 19:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:35.957 19:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:35.957 19:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:35.957 19:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:35.957 19:29:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:35.957 19:29:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:35.957 19:29:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:35.957 19:29:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:35.957 19:29:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 64728 00:07:35.957 19:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 64728 ']' 00:07:35.957 19:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 64728 00:07:35.957 19:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:35.957 19:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:35.957 19:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64728 00:07:35.957 killing process with pid 64728 00:07:35.957 19:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:35.957 19:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:35.957 19:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64728' 00:07:35.957 19:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 64728 00:07:35.957 19:29:26 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 64728 00:07:39.244 ************************************ 00:07:39.244 END TEST locking_overlapped_coremask 00:07:39.244 ************************************ 00:07:39.244 00:07:39.244 real 0m5.385s 00:07:39.244 user 0m13.838s 00:07:39.244 sys 0m0.715s 00:07:39.244 19:29:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.244 19:29:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:39.244 19:29:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:39.244 19:29:29 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:39.244 19:29:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:39.244 19:29:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.244 19:29:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.244 ************************************ 00:07:39.244 START TEST locking_overlapped_coremask_via_rpc 00:07:39.244 ************************************ 00:07:39.244 19:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:39.244 19:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64821 00:07:39.244 19:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:39.244 19:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 64821 /var/tmp/spdk.sock 00:07:39.244 19:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64821 ']' 00:07:39.244 19:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.244 19:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.244 19:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.244 19:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.244 19:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.244 [2024-07-15 19:29:29.761027] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:07:39.244 [2024-07-15 19:29:29.761244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64821 ] 00:07:39.244 [2024-07-15 19:29:29.949674] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:39.244 [2024-07-15 19:29:29.949948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:39.502 [2024-07-15 19:29:30.229094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.502 [2024-07-15 19:29:30.229141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.502 [2024-07-15 19:29:30.229176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:40.875 19:29:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.875 19:29:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:40.875 19:29:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64850 00:07:40.875 19:29:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 64850 /var/tmp/spdk2.sock 00:07:40.875 19:29:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64850 ']' 00:07:40.875 19:29:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:40.875 19:29:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:40.875 19:29:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:40.875 19:29:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:40.875 19:29:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:40.875 19:29:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.875 [2024-07-15 19:29:31.521663] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:07:40.875 [2024-07-15 19:29:31.521819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64850 ] 00:07:41.132 [2024-07-15 19:29:31.703923] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:41.132 [2024-07-15 19:29:31.704186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.697 [2024-07-15 19:29:32.244711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.697 [2024-07-15 19:29:32.244826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:41.697 [2024-07-15 19:29:32.244756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.605 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.606 [2024-07-15 19:29:34.236112] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64821 has claimed it. 00:07:43.606 request: 00:07:43.606 { 00:07:43.606 "method": "framework_enable_cpumask_locks", 00:07:43.606 "req_id": 1 00:07:43.606 } 00:07:43.606 Got JSON-RPC error response 00:07:43.606 response: 00:07:43.606 { 00:07:43.606 "code": -32603, 00:07:43.606 "message": "Failed to claim CPU core: 2" 00:07:43.606 } 00:07:43.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 64821 /var/tmp/spdk.sock 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64821 ']' 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:43.606 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.885 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:43.885 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:43.885 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 64850 /var/tmp/spdk2.sock 00:07:43.885 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64850 ']' 00:07:43.885 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:43.885 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:43.885 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:43.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:43.885 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:43.885 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.142 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.142 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:44.142 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:44.142 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:44.142 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:44.142 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:44.142 ************************************ 00:07:44.142 END TEST locking_overlapped_coremask_via_rpc 00:07:44.142 ************************************ 00:07:44.142 00:07:44.142 real 0m5.097s 00:07:44.143 user 0m1.436s 00:07:44.143 sys 0m0.260s 00:07:44.143 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.143 19:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.143 19:29:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:44.143 19:29:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:44.143 19:29:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64821 ]] 00:07:44.143 19:29:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64821 00:07:44.143 19:29:34 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64821 ']' 00:07:44.143 19:29:34 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64821 00:07:44.143 19:29:34 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:44.143 19:29:34 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:44.143 19:29:34 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64821 00:07:44.143 killing process with pid 64821 00:07:44.143 19:29:34 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:44.143 19:29:34 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:44.143 19:29:34 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64821' 00:07:44.143 19:29:34 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64821 00:07:44.143 19:29:34 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64821 00:07:47.423 19:29:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64850 ]] 00:07:47.423 19:29:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64850 00:07:47.423 19:29:37 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64850 ']' 00:07:47.423 19:29:37 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64850 00:07:47.423 19:29:37 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:47.423 19:29:37 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:47.423 19:29:37 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64850 00:07:47.423 killing process with pid 64850 00:07:47.423 19:29:37 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:47.423 19:29:37 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:47.423 19:29:37 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64850' 00:07:47.423 19:29:37 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64850 00:07:47.423 19:29:37 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64850 00:07:50.019 19:29:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:50.019 Process with pid 64821 is not found 00:07:50.019 Process with pid 64850 is not found 00:07:50.019 19:29:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:50.019 19:29:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64821 ]] 00:07:50.019 19:29:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64821 00:07:50.019 19:29:40 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64821 ']' 00:07:50.019 19:29:40 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64821 00:07:50.019 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64821) - No such process 00:07:50.019 19:29:40 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64821 is not found' 00:07:50.019 19:29:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64850 ]] 00:07:50.019 19:29:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64850 00:07:50.019 19:29:40 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64850 ']' 00:07:50.019 19:29:40 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64850 00:07:50.019 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64850) - No such process 00:07:50.019 19:29:40 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64850 is not found' 00:07:50.019 19:29:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:50.019 ************************************ 00:07:50.019 END TEST cpu_locks 00:07:50.019 ************************************ 00:07:50.019 00:07:50.019 real 0m59.642s 00:07:50.019 user 1m39.628s 00:07:50.019 sys 0m7.992s 00:07:50.019 19:29:40 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.019 19:29:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.019 19:29:40 event -- common/autotest_common.sh@1142 -- # return 0 00:07:50.019 ************************************ 00:07:50.019 END TEST event 00:07:50.019 ************************************ 00:07:50.019 00:07:50.019 real 1m33.178s 00:07:50.019 user 2m41.847s 00:07:50.019 sys 0m12.678s 00:07:50.019 19:29:40 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.019 19:29:40 event -- common/autotest_common.sh@10 -- # set +x 00:07:50.019 19:29:40 -- common/autotest_common.sh@1142 -- # return 0 00:07:50.019 19:29:40 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:50.019 19:29:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.019 19:29:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.019 19:29:40 -- common/autotest_common.sh@10 -- # set +x 00:07:50.019 ************************************ 00:07:50.019 START TEST thread 
00:07:50.019 ************************************ 00:07:50.019 19:29:40 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:50.019 * Looking for test storage... 00:07:50.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:50.019 19:29:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:50.019 19:29:40 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:50.019 19:29:40 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.019 19:29:40 thread -- common/autotest_common.sh@10 -- # set +x 00:07:50.278 ************************************ 00:07:50.278 START TEST thread_poller_perf 00:07:50.278 ************************************ 00:07:50.278 19:29:40 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:50.278 [2024-07-15 19:29:40.871529] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:07:50.278 [2024-07-15 19:29:40.871721] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65043 ] 00:07:50.278 [2024-07-15 19:29:41.063737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.844 [2024-07-15 19:29:41.398676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.844 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:52.216 ====================================== 00:07:52.216 busy:2110961580 (cyc) 00:07:52.216 total_run_count: 344000 00:07:52.216 tsc_hz: 2100000000 (cyc) 00:07:52.216 ====================================== 00:07:52.216 poller_cost: 6136 (cyc), 2921 (nsec) 00:07:52.216 00:07:52.216 ************************************ 00:07:52.216 END TEST thread_poller_perf 00:07:52.216 ************************************ 00:07:52.216 real 0m2.070s 00:07:52.216 user 0m1.820s 00:07:52.216 sys 0m0.138s 00:07:52.216 19:29:42 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.216 19:29:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:52.216 19:29:42 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:52.216 19:29:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:52.216 19:29:42 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:52.216 19:29:42 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.216 19:29:42 thread -- common/autotest_common.sh@10 -- # set +x 00:07:52.216 ************************************ 00:07:52.216 START TEST thread_poller_perf 00:07:52.216 ************************************ 00:07:52.216 19:29:42 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:52.216 [2024-07-15 19:29:43.004744] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:07:52.216 [2024-07-15 19:29:43.004943] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65085 ] 00:07:52.474 [2024-07-15 19:29:43.206484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.731 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:52.731 [2024-07-15 19:29:43.472785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.648 ====================================== 00:07:54.648 busy:2103856520 (cyc) 00:07:54.648 total_run_count: 4525000 00:07:54.648 tsc_hz: 2100000000 (cyc) 00:07:54.648 ====================================== 00:07:54.648 poller_cost: 464 (cyc), 220 (nsec) 00:07:54.648 ************************************ 00:07:54.648 END TEST thread_poller_perf 00:07:54.648 ************************************ 00:07:54.648 00:07:54.648 real 0m2.012s 00:07:54.648 user 0m1.757s 00:07:54.648 sys 0m0.144s 00:07:54.648 19:29:44 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.648 19:29:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:54.648 19:29:45 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:54.648 19:29:45 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:54.648 ************************************ 00:07:54.648 END TEST thread 00:07:54.648 ************************************ 00:07:54.648 00:07:54.648 real 0m4.303s 00:07:54.648 user 0m3.654s 00:07:54.648 sys 0m0.421s 00:07:54.648 19:29:45 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.648 19:29:45 thread -- common/autotest_common.sh@10 -- # set +x 00:07:54.648 19:29:45 -- common/autotest_common.sh@1142 -- # return 0 00:07:54.648 19:29:45 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:54.648 19:29:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:54.648 19:29:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.648 19:29:45 -- common/autotest_common.sh@10 -- # set +x 00:07:54.648 ************************************ 00:07:54.648 START TEST accel 00:07:54.648 ************************************ 00:07:54.648 19:29:45 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:54.648 * Looking for test storage... 00:07:54.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:54.648 19:29:45 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:54.648 19:29:45 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:54.648 19:29:45 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:54.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.648 19:29:45 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=65170 00:07:54.648 19:29:45 accel -- accel/accel.sh@63 -- # waitforlisten 65170 00:07:54.648 19:29:45 accel -- common/autotest_common.sh@829 -- # '[' -z 65170 ']' 00:07:54.648 19:29:45 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.648 19:29:45 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.648 19:29:45 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:54.648 19:29:45 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.648 19:29:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:54.648 19:29:45 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:54.648 19:29:45 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:54.648 19:29:45 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.648 19:29:45 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:54.648 19:29:45 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.648 19:29:45 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.648 19:29:45 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.648 19:29:45 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:54.648 19:29:45 accel -- accel/accel.sh@41 -- # jq -r . 00:07:54.648 [2024-07-15 19:29:45.302290] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:07:54.648 [2024-07-15 19:29:45.302499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65170 ] 00:07:54.906 [2024-07-15 19:29:45.485614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.165 [2024-07-15 19:29:45.723714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.101 19:29:46 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.101 19:29:46 accel -- common/autotest_common.sh@862 -- # return 0 00:07:56.101 19:29:46 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:56.101 19:29:46 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:56.101 19:29:46 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:56.101 19:29:46 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:56.101 19:29:46 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:56.101 19:29:46 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:56.101 19:29:46 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:56.101 19:29:46 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.101 19:29:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:56.101 19:29:46 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.101 19:29:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.101 19:29:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.101 19:29:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.101 19:29:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.101 19:29:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.101 19:29:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.101 19:29:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.101 19:29:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.101 19:29:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.101 19:29:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.101 19:29:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.101 19:29:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.101 19:29:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.101 19:29:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.101 19:29:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.101 19:29:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.101 19:29:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.101 19:29:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.101 19:29:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.101 19:29:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.101 19:29:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.101 19:29:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.101 19:29:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.101 19:29:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.101 19:29:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.101 19:29:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.101 19:29:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.102 19:29:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.102 19:29:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.102 19:29:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.102 19:29:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.102 19:29:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.102 19:29:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.102 19:29:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.102 19:29:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.102 19:29:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.102 19:29:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.102 19:29:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.102 19:29:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.102 19:29:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.102 19:29:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.102 19:29:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.102 19:29:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.102 
19:29:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.102 19:29:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.102 19:29:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.102 19:29:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.102 19:29:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.102 19:29:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.102 19:29:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.102 19:29:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.102 19:29:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.102 19:29:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.102 19:29:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.102 19:29:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.102 19:29:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.102 19:29:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:56.102 19:29:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:56.102 19:29:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:56.102 19:29:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:56.102 19:29:46 accel -- accel/accel.sh@75 -- # killprocess 65170 00:07:56.102 19:29:46 accel -- common/autotest_common.sh@948 -- # '[' -z 65170 ']' 00:07:56.102 19:29:46 accel -- common/autotest_common.sh@952 -- # kill -0 65170 00:07:56.102 19:29:46 accel -- common/autotest_common.sh@953 -- # uname 00:07:56.102 19:29:46 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:56.102 19:29:46 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65170 00:07:56.102 19:29:46 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:56.102 19:29:46 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:56.102 19:29:46 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65170' 00:07:56.102 killing process with pid 65170 00:07:56.102 19:29:46 accel -- common/autotest_common.sh@967 -- # kill 65170 00:07:56.102 19:29:46 accel -- common/autotest_common.sh@972 -- # wait 65170 00:07:59.425 19:29:49 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:59.425 19:29:49 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:59.425 19:29:49 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:59.425 19:29:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.425 19:29:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:59.425 19:29:49 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:59.425 19:29:49 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:59.425 19:29:49 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:59.425 19:29:49 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:59.425 19:29:49 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:59.425 19:29:49 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.425 19:29:49 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.425 19:29:49 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:59.425 19:29:49 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:59.425 19:29:49 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
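For reference, the jq filter traced above at accel.sh@70 flattens the accel_get_opc_assignments RPC reply into opc=module pairs, which the loop then splits on '=' via read. A minimal sketch with an illustrative reply (the JSON below is a made-up sample, not output captured from this run):

    $ echo '{"copy":"software","crc32c":"software"}' \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    copy=software
    crc32c=software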
00:07:59.425 19:29:49 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.425 19:29:49 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:59.425 19:29:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:59.425 19:29:49 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:59.425 19:29:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:59.425 19:29:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.425 19:29:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:59.425 ************************************ 00:07:59.425 START TEST accel_missing_filename 00:07:59.425 ************************************ 00:07:59.425 19:29:49 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:59.425 19:29:49 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:59.425 19:29:49 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:59.425 19:29:49 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:59.425 19:29:49 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:59.425 19:29:49 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:59.425 19:29:49 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:59.425 19:29:49 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:59.425 19:29:49 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:59.425 19:29:49 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:59.425 19:29:49 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:59.425 19:29:49 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:59.425 19:29:49 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.425 19:29:49 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.425 19:29:49 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:59.425 19:29:49 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:59.425 19:29:49 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:59.425 [2024-07-15 19:29:49.662494] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:07:59.425 [2024-07-15 19:29:49.662905] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65258 ] 00:07:59.425 [2024-07-15 19:29:49.850074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.425 [2024-07-15 19:29:50.097193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.682 [2024-07-15 19:29:50.358600] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:00.245 [2024-07-15 19:29:50.971225] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:08:00.822 A filename is required. 
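The accel_missing_filename case above runs accel_perf under the NOT wrapper from autotest_common.sh, so the test passes precisely because the application refuses to start a compress workload without -l. A rough sketch of that pattern, assuming a simplified wrapper (the real helper also normalizes the exit status, as the es= handling traced just below shows):

    NOT() {
      # succeed only if the wrapped command fails
      if "$@"; then
        return 1
      fi
      return 0
    }
    NOT accel_perf -t 1 -w compress   # compress with no -l input file is expected to fail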
00:08:00.822 19:29:51 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:08:00.822 19:29:51 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:00.822 19:29:51 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:08:00.822 19:29:51 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:08:00.822 19:29:51 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:08:00.822 19:29:51 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:00.822 00:08:00.822 real 0m1.844s 00:08:00.822 user 0m1.567s 00:08:00.822 sys 0m0.205s 00:08:00.822 19:29:51 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.822 19:29:51 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:08:00.822 ************************************ 00:08:00.822 END TEST accel_missing_filename 00:08:00.822 ************************************ 00:08:00.822 19:29:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:00.822 19:29:51 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:00.822 19:29:51 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:08:00.822 19:29:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.822 19:29:51 accel -- common/autotest_common.sh@10 -- # set +x 00:08:00.822 ************************************ 00:08:00.822 START TEST accel_compress_verify 00:08:00.822 ************************************ 00:08:00.822 19:29:51 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:00.822 19:29:51 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:08:00.822 19:29:51 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:00.823 19:29:51 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:00.823 19:29:51 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.823 19:29:51 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:00.823 19:29:51 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.823 19:29:51 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:00.823 19:29:51 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:00.823 19:29:51 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:00.823 19:29:51 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.823 19:29:51 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.823 19:29:51 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.823 19:29:51 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.823 19:29:51 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.823 19:29:51 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:08:00.823 19:29:51 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:08:00.823 [2024-07-15 19:29:51.545940] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:08:00.823 [2024-07-15 19:29:51.546086] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65295 ] 00:08:01.079 [2024-07-15 19:29:51.718809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.336 [2024-07-15 19:29:52.025673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.593 [2024-07-15 19:29:52.293217] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:02.526 [2024-07-15 19:29:52.954136] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:08:02.782 00:08:02.782 Compression does not support the verify option, aborting. 00:08:02.782 19:29:53 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:08:02.782 19:29:53 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:02.782 19:29:53 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:08:02.782 19:29:53 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:08:02.782 19:29:53 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:08:02.782 19:29:53 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:02.782 00:08:02.782 real 0m1.974s 00:08:02.782 user 0m1.707s 00:08:02.782 sys 0m0.183s 00:08:02.782 19:29:53 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.782 19:29:53 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:08:02.782 ************************************ 00:08:02.782 END TEST accel_compress_verify 00:08:02.782 ************************************ 00:08:02.782 19:29:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:02.782 19:29:53 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:02.782 19:29:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:02.782 19:29:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.782 19:29:53 accel -- common/autotest_common.sh@10 -- # set +x 00:08:02.782 ************************************ 00:08:02.782 START TEST accel_wrong_workload 00:08:02.782 ************************************ 00:08:02.782 19:29:53 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:08:02.782 19:29:53 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:08:02.782 19:29:53 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:02.782 19:29:53 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:02.782 19:29:53 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.782 19:29:53 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:02.782 19:29:53 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.782 19:29:53 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:08:02.782 19:29:53 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:02.782 19:29:53 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:08:02.782 19:29:53 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:02.782 19:29:53 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:02.782 19:29:53 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.782 19:29:53 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.782 19:29:53 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:02.782 19:29:53 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:08:02.782 19:29:53 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:08:03.040 Unsupported workload type: foobar 00:08:03.040 [2024-07-15 19:29:53.589040] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:03.040 accel_perf options: 00:08:03.040 [-h help message] 00:08:03.040 [-q queue depth per core] 00:08:03.040 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:03.040 [-T number of threads per core 00:08:03.040 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:03.040 [-t time in seconds] 00:08:03.040 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:03.040 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:03.040 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:03.040 [-l for compress/decompress workloads, name of uncompressed input file 00:08:03.040 [-S for crc32c workload, use this seed value (default 0) 00:08:03.040 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:03.040 [-f for fill workload, use this BYTE value (default 255) 00:08:03.040 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:03.040 [-y verify result if this switch is on] 00:08:03.040 [-a tasks to allocate per core (default: same value as -q)] 00:08:03.040 Can be used to spread operations across a wider range of memory. 
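Putting the option listing above together, a minimal well-formed invocation looks like the sketch below; the binary path and the -t/-w/-S/-y flags are taken from this log, while the queue depth and transfer size are illustrative values rather than ones used by this run:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w crc32c -S 32 -y -q 64 -o 4096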
00:08:03.040 19:29:53 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:08:03.040 19:29:53 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:03.040 19:29:53 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:03.040 19:29:53 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:03.040 00:08:03.040 real 0m0.094s 00:08:03.040 user 0m0.082s 00:08:03.040 sys 0m0.051s 00:08:03.040 19:29:53 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.040 ************************************ 00:08:03.040 END TEST accel_wrong_workload 00:08:03.040 ************************************ 00:08:03.040 19:29:53 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:08:03.040 19:29:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:03.040 19:29:53 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:03.040 19:29:53 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:08:03.040 19:29:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.041 19:29:53 accel -- common/autotest_common.sh@10 -- # set +x 00:08:03.041 ************************************ 00:08:03.041 START TEST accel_negative_buffers 00:08:03.041 ************************************ 00:08:03.041 19:29:53 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:03.041 19:29:53 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:08:03.041 19:29:53 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:03.041 19:29:53 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:03.041 19:29:53 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.041 19:29:53 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:03.041 19:29:53 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.041 19:29:53 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:08:03.041 19:29:53 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:03.041 19:29:53 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:08:03.041 19:29:53 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:03.041 19:29:53 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:03.041 19:29:53 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.041 19:29:53 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.041 19:29:53 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:03.041 19:29:53 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:08:03.041 19:29:53 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:08:03.041 -x option must be non-negative. 
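The accel_negative_buffers case traced above drives the xor workload with -x -1, which the parser rejects; per the help text, -x sets the number of xor source buffers with a minimum of 2, so a valid positive run would instead look something like this (a sketch, not a command from this log):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2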
00:08:03.041 [2024-07-15 19:29:53.743383] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:03.041 accel_perf options: 00:08:03.041 [-h help message] 00:08:03.041 [-q queue depth per core] 00:08:03.041 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:03.041 [-T number of threads per core 00:08:03.041 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:03.041 [-t time in seconds] 00:08:03.041 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:03.041 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:03.041 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:03.041 [-l for compress/decompress workloads, name of uncompressed input file 00:08:03.041 [-S for crc32c workload, use this seed value (default 0) 00:08:03.041 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:03.041 [-f for fill workload, use this BYTE value (default 255) 00:08:03.041 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:03.041 [-y verify result if this switch is on] 00:08:03.041 [-a tasks to allocate per core (default: same value as -q)] 00:08:03.041 Can be used to spread operations across a wider range of memory. 00:08:03.041 19:29:53 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:08:03.041 19:29:53 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:03.041 19:29:53 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:03.041 19:29:53 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:03.041 00:08:03.041 real 0m0.089s 00:08:03.041 user 0m0.074s 00:08:03.041 sys 0m0.056s 00:08:03.041 19:29:53 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.041 ************************************ 00:08:03.041 END TEST accel_negative_buffers 00:08:03.041 ************************************ 00:08:03.041 19:29:53 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:08:03.041 19:29:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:03.041 19:29:53 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:03.041 19:29:53 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:03.041 19:29:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.041 19:29:53 accel -- common/autotest_common.sh@10 -- # set +x 00:08:03.041 ************************************ 00:08:03.041 START TEST accel_crc32c 00:08:03.041 ************************************ 00:08:03.041 19:29:53 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:03.041 19:29:53 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:03.041 19:29:53 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:03.041 19:29:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:03.041 19:29:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:03.041 19:29:53 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:03.299 19:29:53 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:03.299 19:29:53 accel.accel_crc32c -- 
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:03.299 19:29:53 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:03.299 19:29:53 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:03.299 19:29:53 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.299 19:29:53 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.299 19:29:53 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:03.299 19:29:53 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:03.299 19:29:53 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:03.299 [2024-07-15 19:29:53.879253] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:08:03.299 [2024-07-15 19:29:53.879401] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65378 ] 00:08:03.299 [2024-07-15 19:29:54.045551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.557 [2024-07-15 19:29:54.305194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.814 19:29:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:03.814 19:29:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:03.814 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:03.814 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:03.814 19:29:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:03.814 19:29:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:03.814 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:03.814 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:03.814 19:29:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:03.814 19:29:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:03.814 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:03.815 19:29:54 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:03.815 19:29:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.422 19:29:56 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:06.422 19:29:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:06.422 00:08:06.422 real 0m2.881s 00:08:06.422 user 0m2.604s 00:08:06.422 sys 0m0.182s 00:08:06.422 19:29:56 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.422 19:29:56 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:06.422 ************************************ 00:08:06.422 END TEST accel_crc32c 00:08:06.422 ************************************ 00:08:06.422 19:29:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:06.422 19:29:56 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:06.422 19:29:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:06.422 19:29:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.422 19:29:56 accel -- common/autotest_common.sh@10 -- # set +x 00:08:06.422 ************************************ 00:08:06.422 START TEST accel_crc32c_C2 00:08:06.422 ************************************ 00:08:06.422 19:29:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:06.422 19:29:56 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:06.422 19:29:56 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:06.422 19:29:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.422 19:29:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.422 19:29:56 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:06.422 19:29:56 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 
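The accel_crc32c_C2 run launched above differs from the plain crc32c case only in passing -C 2, which, per the accel_perf help shown earlier in this log, sizes the io vector at two elements instead of the default one. A directly comparable manual invocation would be roughly (transfer size added only for illustration):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -y -C 2 -o 4096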
00:08:06.422 19:29:56 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:06.422 19:29:56 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:06.422 19:29:56 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:06.422 19:29:56 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.422 19:29:56 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.422 19:29:56 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:06.422 19:29:56 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:06.422 19:29:56 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:06.422 [2024-07-15 19:29:56.813015] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:08:06.422 [2024-07-15 19:29:56.813147] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65425 ] 00:08:06.422 [2024-07-15 19:29:56.976927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.771 [2024-07-15 19:29:57.248275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.054 19:29:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.030 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.031 
19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:09.031 00:08:09.031 real 0m2.879s 00:08:09.031 user 0m2.597s 00:08:09.031 sys 0m0.186s 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.031 19:29:59 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:09.031 ************************************ 00:08:09.031 END TEST accel_crc32c_C2 00:08:09.031 ************************************ 00:08:09.031 19:29:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:09.031 19:29:59 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:09.031 19:29:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:09.031 19:29:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.031 19:29:59 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.031 ************************************ 00:08:09.031 START TEST accel_copy 00:08:09.031 ************************************ 00:08:09.031 19:29:59 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:08:09.031 19:29:59 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:09.031 19:29:59 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:08:09.031 19:29:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.031 19:29:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.031 19:29:59 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:09.031 19:29:59 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:09.031 19:29:59 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:09.031 19:29:59 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.031 19:29:59 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.031 19:29:59 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.031 19:29:59 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.031 19:29:59 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.031 19:29:59 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:09.031 19:29:59 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:08:09.031 [2024-07-15 19:29:59.744860] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:08:09.031 [2024-07-15 19:29:59.744982] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65477 ] 00:08:09.291 [2024-07-15 19:29:59.911204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.548 [2024-07-15 19:30:00.167426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.805 19:30:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.806 
19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.806 19:30:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:12.334 19:30:02 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:12.334 00:08:12.334 real 0m2.884s 00:08:12.334 user 0m2.604s 00:08:12.334 sys 0m0.183s 00:08:12.334 19:30:02 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.334 19:30:02 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:08:12.334 ************************************ 00:08:12.334 END TEST accel_copy 00:08:12.334 ************************************ 00:08:12.334 19:30:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:12.334 19:30:02 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:12.334 19:30:02 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:12.334 19:30:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.334 19:30:02 accel -- common/autotest_common.sh@10 -- # set +x 00:08:12.334 ************************************ 00:08:12.334 START TEST accel_fill 00:08:12.334 ************************************ 00:08:12.334 19:30:02 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:12.334 19:30:02 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:08:12.334 19:30:02 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:08:12.334 19:30:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.334 19:30:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.334 19:30:02 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:12.334 19:30:02 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:12.334 19:30:02 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:08:12.334 19:30:02 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:12.334 19:30:02 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:12.334 19:30:02 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.334 19:30:02 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.334 19:30:02 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:12.334 19:30:02 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:08:12.334 19:30:02 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:08:12.334 [2024-07-15 19:30:02.695594] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:08:12.334 [2024-07-15 19:30:02.696056] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65523 ] 00:08:12.334 [2024-07-15 19:30:02.876840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.592 [2024-07-15 19:30:03.133552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:12.851 19:30:03 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.851 19:30:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:08:14.749 19:30:05 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:14.749 00:08:14.749 real 0m2.877s 00:08:14.749 user 0m2.570s 00:08:14.750 sys 0m0.210s 00:08:14.750 19:30:05 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.750 19:30:05 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:08:14.750 ************************************ 00:08:14.750 END TEST accel_fill 00:08:14.750 ************************************ 00:08:15.007 19:30:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:15.007 19:30:05 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:15.007 19:30:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:15.007 19:30:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.007 19:30:05 accel -- common/autotest_common.sh@10 -- # set +x 00:08:15.007 ************************************ 00:08:15.007 START TEST accel_copy_crc32c 00:08:15.007 ************************************ 00:08:15.007 19:30:05 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:08:15.007 19:30:05 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:15.007 19:30:05 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:15.007 19:30:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.007 19:30:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.007 19:30:05 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:15.007 19:30:05 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:15.007 19:30:05 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:15.007 19:30:05 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:15.007 19:30:05 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:15.007 19:30:05 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.007 19:30:05 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.007 19:30:05 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:15.007 19:30:05 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:08:15.007 19:30:05 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:15.007 [2024-07-15 19:30:05.624954] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:08:15.007 [2024-07-15 19:30:05.625129] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65574 ] 00:08:15.264 [2024-07-15 19:30:05.807229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.523 [2024-07-15 19:30:06.062519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.782 19:30:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.708 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.968 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:17.968 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:17.968 19:30:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:17.968 00:08:17.968 real 0m2.946s 00:08:17.968 user 0m2.637s 00:08:17.968 sys 0m0.206s 00:08:17.968 19:30:08 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.968 19:30:08 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:17.968 ************************************ 00:08:17.968 END TEST accel_copy_crc32c 00:08:17.968 ************************************ 00:08:17.968 19:30:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:17.968 19:30:08 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:17.968 19:30:08 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:17.968 19:30:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.968 19:30:08 accel -- common/autotest_common.sh@10 -- # set +x 00:08:17.968 ************************************ 00:08:17.968 START TEST accel_copy_crc32c_C2 00:08:17.968 ************************************ 00:08:17.968 19:30:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:17.968 19:30:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:17.968 19:30:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:17.968 19:30:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.968 19:30:08 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:08:17.968 19:30:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:17.968 19:30:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:17.968 19:30:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:17.968 19:30:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:17.968 19:30:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:17.968 19:30:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.968 19:30:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.968 19:30:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:17.968 19:30:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:17.968 19:30:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:17.968 [2024-07-15 19:30:08.627870] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:08:17.968 [2024-07-15 19:30:08.628046] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65622 ] 00:08:18.225 [2024-07-15 19:30:08.814337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.483 [2024-07-15 19:30:09.160905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.742 19:30:09 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.742 19:30:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:21.331 ************************************ 00:08:21.331 END TEST accel_copy_crc32c_C2 00:08:21.331 ************************************ 00:08:21.331 00:08:21.331 real 0m2.987s 00:08:21.331 
user 0m0.014s 00:08:21.331 sys 0m0.005s 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.331 19:30:11 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:21.331 19:30:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:21.331 19:30:11 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:21.331 19:30:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:21.331 19:30:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.331 19:30:11 accel -- common/autotest_common.sh@10 -- # set +x 00:08:21.331 ************************************ 00:08:21.331 START TEST accel_dualcast 00:08:21.331 ************************************ 00:08:21.331 19:30:11 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:08:21.331 19:30:11 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:21.331 19:30:11 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:21.331 19:30:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.331 19:30:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:21.331 19:30:11 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:21.331 19:30:11 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:21.331 19:30:11 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:21.331 19:30:11 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:21.331 19:30:11 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:21.331 19:30:11 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.331 19:30:11 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.331 19:30:11 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:21.331 19:30:11 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:21.331 19:30:11 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:21.331 [2024-07-15 19:30:11.668064] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
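Each of these cases drives the same `accel_perf` example binary with a different `-w` workload; the harness passes it a JSON accel config over `-c /dev/fd/62` (empty in these runs, and the checks above confirm the software module was used) plus the flags shown in the trace. A rough, stand-alone way to replay the invocations recorded so far against a built SPDK tree might look like the sketch below; the binary path and flag values are copied verbatim from the log lines above, while the wrapper script itself and the omission of the `/dev/fd/62` config plumbing are assumptions of this sketch, not part of the harness.

```bash
#!/usr/bin/env bash
# Replay sketch for the accel_perf runs recorded above (software path, -t 1 second each).
# Path and flag values are taken from the log; this wrapper is illustrative only.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
PERF="$SPDK_DIR/build/examples/accel_perf"

"$PERF" -t 1 -w fill -f 128 -q 64 -a 64 -y   # accel_fill
"$PERF" -t 1 -w copy_crc32c -y               # accel_copy_crc32c
"$PERF" -t 1 -w copy_crc32c -y -C 2          # accel_copy_crc32c_C2
"$PERF" -t 1 -w dualcast -y                  # accel_dualcast
```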
00:08:21.331 [2024-07-15 19:30:11.668236] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65674 ] 00:08:21.331 [2024-07-15 19:30:11.851758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.604 [2024-07-15 19:30:12.108623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.604 19:30:12 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:21.604 19:30:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:21.862 19:30:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:21.862 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.862 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:21.862 19:30:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:21.862 19:30:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:21.862 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.862 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:21.862 19:30:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:08:21.862 19:30:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:21.862 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.862 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:21.862 19:30:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:21.862 19:30:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:21.863 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.863 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:21.863 19:30:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:21.863 19:30:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:21.863 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.863 19:30:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:23.759 19:30:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:23.760 19:30:14 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:23.760 00:08:23.760 real 0m2.902s 00:08:23.760 user 0m2.600s 00:08:23.760 sys 0m0.201s 00:08:23.760 19:30:14 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.760 19:30:14 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:23.760 ************************************ 00:08:23.760 END TEST accel_dualcast 00:08:23.760 ************************************ 00:08:24.017 19:30:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:24.017 19:30:14 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:24.017 19:30:14 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:24.017 19:30:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.017 19:30:14 accel -- common/autotest_common.sh@10 -- # set +x 00:08:24.017 ************************************ 00:08:24.017 START TEST accel_compare 00:08:24.017 ************************************ 00:08:24.017 19:30:14 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:08:24.017 19:30:14 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:24.017 19:30:14 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:24.017 19:30:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:24.017 19:30:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:24.017 19:30:14 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:24.017 19:30:14 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:24.017 19:30:14 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:24.017 19:30:14 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:24.017 19:30:14 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:24.017 19:30:14 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:24.017 19:30:14 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:24.017 19:30:14 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:24.017 19:30:14 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:24.017 19:30:14 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:24.017 [2024-07-15 19:30:14.627525] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
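The `'[' 13 -le 1 ']'`, `'[' 7 -le 1 ']'` and `'[' 9 -le 1 ']'` traces that precede each START TEST banner are an argument-count guard evaluated before the test command runs; the number is simply how many words follow `run_test` (for example, `accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y` is 13 words). A hypothetical, heavily reduced sketch of such a wrapper follows; the real helper lives in `common/autotest_common.sh` and is not reproduced in this log, so every name and detail below is illustrative only.

```bash
# Illustrative sketch only -- not the actual run_test from common/autotest_common.sh.
# It mirrors the traces seen above: an argument-count guard (e.g. "'[' 13 -le 1 ']'"),
# START/END banners, and execution of the remaining words as the test command.
run_test_sketch() {
    [ "$#" -le 1 ] && { echo "usage: run_test <name> <command...>" >&2; return 1; }
    local name=$1
    shift
    echo "START TEST $name"
    "$@"                      # e.g. accel_test -t 1 -w compare -y
    local rc=$?
    echo "END TEST $name"
    return "$rc"
}
```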
00:08:24.017 [2024-07-15 19:30:14.627694] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65726 ] 00:08:24.274 [2024-07-15 19:30:14.809835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.532 [2024-07-15 19:30:15.088455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:24.789 19:30:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:27.319 19:30:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:27.319 ************************************ 00:08:27.319 END TEST accel_compare 00:08:27.319 ************************************ 00:08:27.319 00:08:27.319 real 0m3.056s 00:08:27.319 user 0m2.738s 00:08:27.319 sys 0m0.215s 00:08:27.319 19:30:17 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:27.319 19:30:17 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:27.319 19:30:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:27.319 19:30:17 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:27.319 19:30:17 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:27.319 19:30:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.319 19:30:17 accel -- common/autotest_common.sh@10 -- # set +x 00:08:27.319 ************************************ 00:08:27.319 START TEST accel_xor 00:08:27.319 ************************************ 00:08:27.319 19:30:17 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:08:27.319 19:30:17 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:27.319 19:30:17 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:27.319 19:30:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.319 19:30:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.319 19:30:17 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:27.319 19:30:17 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:27.319 19:30:17 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:27.319 19:30:17 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:27.319 19:30:17 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:27.319 19:30:17 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:27.319 19:30:17 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:27.319 19:30:17 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:27.319 19:30:17 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:27.319 19:30:17 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:27.319 [2024-07-15 19:30:17.724937] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
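The `[[ software == \s\o\f\t\w\a\r\e ]]` lines that close each case are not garbled output: bash xtrace backslash-escapes a quoted right-hand side of `==` inside `[[ ]]` so it cannot be read as a glob pattern. A minimal reproduction (not taken from the harness) is:

```bash
#!/usr/bin/env bash
set -x
accel_module=software
# With the right-hand side quoted, the comparison is literal; xtrace prints it
# as: [[ software == \s\o\f\t\w\a\r\e ]]
[[ $accel_module == "software" ]] && echo "software accel module confirmed"
```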
00:08:27.319 [2024-07-15 19:30:17.725071] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65778 ] 00:08:27.319 [2024-07-15 19:30:17.900606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.576 [2024-07-15 19:30:18.233846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.834 19:30:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.368 19:30:20 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:30.368 19:30:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:30.368 00:08:30.368 real 0m3.047s 00:08:30.369 user 0m2.758s 00:08:30.369 sys 0m0.192s 00:08:30.369 19:30:20 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.369 19:30:20 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:30.369 ************************************ 00:08:30.369 END TEST accel_xor 00:08:30.369 ************************************ 00:08:30.369 19:30:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:30.369 19:30:20 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:30.369 19:30:20 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:30.369 19:30:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.369 19:30:20 accel -- common/autotest_common.sh@10 -- # set +x 00:08:30.369 ************************************ 00:08:30.369 START TEST accel_xor 00:08:30.369 ************************************ 00:08:30.369 19:30:20 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:08:30.369 19:30:20 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:30.369 19:30:20 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:30.369 19:30:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.369 19:30:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.369 19:30:20 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:30.369 19:30:20 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:30.369 19:30:20 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:30.369 19:30:20 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:30.369 19:30:20 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:30.369 19:30:20 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:30.369 19:30:20 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.369 19:30:20 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:30.369 19:30:20 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:30.369 19:30:20 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:30.369 [2024-07-15 19:30:20.826218] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
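For reference, the command being wrapped at this point can be reproduced by hand against a built SPDK tree. A minimal sketch, assuming the example binaries sit under build/examples and skipping the generated JSON accel config that the harness normally pipes in through -c /dev/fd/62; the -y and -x 3 flags are carried over verbatim from the trace, with -y assumed to enable result verification and -x 3 assumed to set the xor source-buffer count:

  # run the software xor workload for 1 second, three source buffers, verify output
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3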
00:08:30.369 [2024-07-15 19:30:20.826364] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65830 ] 00:08:30.369 [2024-07-15 19:30:20.993365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.626 [2024-07-15 19:30:21.263429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.885 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.886 19:30:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:33.412 19:30:23 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:33.412 19:30:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:33.412 00:08:33.412 real 0m2.968s 00:08:33.412 user 0m2.690s 00:08:33.412 sys 0m0.175s 00:08:33.412 19:30:23 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.412 19:30:23 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:33.412 ************************************ 00:08:33.412 END TEST accel_xor 00:08:33.412 ************************************ 00:08:33.412 19:30:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:33.412 19:30:23 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:33.412 19:30:23 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:33.412 19:30:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.412 19:30:23 accel -- common/autotest_common.sh@10 -- # set +x 00:08:33.412 ************************************ 00:08:33.412 START TEST accel_dif_verify 00:08:33.412 ************************************ 00:08:33.412 19:30:23 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:08:33.412 19:30:23 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:33.412 19:30:23 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:33.412 19:30:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.412 19:30:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.412 19:30:23 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:33.412 19:30:23 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:33.412 19:30:23 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:33.412 19:30:23 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:33.412 19:30:23 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:33.412 19:30:23 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.412 19:30:23 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.412 19:30:23 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:33.412 19:30:23 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:33.412 19:30:23 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:33.412 [2024-07-15 19:30:23.858556] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
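The dif_verify case being started here follows the same pattern and differs only in the -w argument. A rough manual equivalent, under the same assumptions as the xor sketch above:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify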
00:08:33.413 [2024-07-15 19:30:23.858749] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65881 ] 00:08:33.413 [2024-07-15 19:30:24.047718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.671 [2024-07-15 19:30:24.309665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.959 19:30:24 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.959 19:30:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:33.960 19:30:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.960 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.960 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.960 19:30:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:33.960 19:30:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.960 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.960 19:30:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:36.492 19:30:26 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:36.492 19:30:26 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:36.492 00:08:36.492 real 0m2.988s 00:08:36.492 user 0m0.017s 00:08:36.492 sys 0m0.000s 00:08:36.492 ************************************ 00:08:36.492 END TEST accel_dif_verify 00:08:36.492 ************************************ 00:08:36.492 19:30:26 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.492 19:30:26 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:36.492 19:30:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:36.492 19:30:26 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:36.492 19:30:26 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:36.492 19:30:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.492 19:30:26 accel -- common/autotest_common.sh@10 -- # set +x 00:08:36.492 ************************************ 00:08:36.492 START TEST accel_dif_generate 00:08:36.492 ************************************ 00:08:36.492 19:30:26 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:08:36.492 19:30:26 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:36.492 19:30:26 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:36.492 19:30:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.492 19:30:26 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.492 19:30:26 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:36.492 19:30:26 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:36.492 19:30:26 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:36.492 19:30:26 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:36.492 19:30:26 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:36.492 19:30:26 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:36.492 19:30:26 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:36.492 19:30:26 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:36.492 19:30:26 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:36.492 19:30:26 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:36.492 [2024-07-15 19:30:26.895973] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:08:36.492 [2024-07-15 19:30:26.896224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65929 ] 00:08:36.492 [2024-07-15 19:30:27.086714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.750 [2024-07-15 19:30:27.363921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.008 19:30:27 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.008 19:30:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:08:37.008 19:30:27 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:08:37.009 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.009 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.009 19:30:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:37.009 19:30:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.009 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.009 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.009 19:30:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:37.009 19:30:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.009 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.009 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.009 19:30:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:37.009 19:30:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.009 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.009 19:30:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:39.566 19:30:29 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:39.566 ************************************ 00:08:39.566 END TEST accel_dif_generate 00:08:39.566 ************************************ 00:08:39.566 
19:30:29 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:39.566 00:08:39.566 real 0m2.992s 00:08:39.566 user 0m2.686s 00:08:39.566 sys 0m0.204s 00:08:39.566 19:30:29 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.566 19:30:29 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:39.566 19:30:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:39.566 19:30:29 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:39.566 19:30:29 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:39.566 19:30:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.566 19:30:29 accel -- common/autotest_common.sh@10 -- # set +x 00:08:39.566 ************************************ 00:08:39.566 START TEST accel_dif_generate_copy 00:08:39.566 ************************************ 00:08:39.566 19:30:29 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:08:39.566 19:30:29 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:39.566 19:30:29 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:39.566 19:30:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:39.566 19:30:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:39.566 19:30:29 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:39.566 19:30:29 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:39.566 19:30:29 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:39.566 19:30:29 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:39.566 19:30:29 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:39.566 19:30:29 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:39.566 19:30:29 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:39.566 19:30:29 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:39.566 19:30:29 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:39.566 19:30:29 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:39.567 [2024-07-15 19:30:29.925150] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
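The dif_generate run that just finished and the dif_generate_copy run being initialized here again change only the workload name passed to accel_perf. Rough manual equivalents, same assumptions as the earlier sketches:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy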
00:08:39.567 [2024-07-15 19:30:29.925298] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65981 ] 00:08:39.567 [2024-07-15 19:30:30.093047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.824 [2024-07-15 19:30:30.393609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.083 19:30:30 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.083 19:30:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
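The recurring IFS=: / read -r var val / case "$var" sequence in this trace is a stock shell idiom: accel.sh appears to read colon-separated key/value lines (apparently the configuration summary that accel_perf prints) and dispatch on the key. A stripped-down illustration of that loop shape only; the key names and input below are hypothetical and are not the actual accel.sh source:

  # hypothetical "key: value" input standing in for accel_perf's printed summary
  while IFS=: read -r var val; do
    case "$var" in
      *"test type"*) accel_opc=$val ;;      # hypothetical key
      *module*)      accel_module=$val ;;   # hypothetical key
      *)             : ;;                   # ignore anything else
    esac
  done < <(printf 'test type: xor\nmodule: software\n')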
00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:42.606 ************************************ 00:08:42.606 END TEST accel_dif_generate_copy 00:08:42.606 ************************************ 00:08:42.606 00:08:42.606 real 0m2.995s 00:08:42.606 user 0m2.700s 00:08:42.606 sys 0m0.188s 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:42.606 19:30:32 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:42.606 19:30:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:42.606 19:30:32 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:42.606 19:30:32 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:42.606 19:30:32 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:42.606 19:30:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.606 19:30:32 accel -- common/autotest_common.sh@10 -- # set +x 00:08:42.606 ************************************ 00:08:42.606 START TEST accel_comp 00:08:42.606 ************************************ 00:08:42.606 19:30:32 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:42.606 19:30:32 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:42.606 19:30:32 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:42.606 19:30:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:42.606 19:30:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:42.606 19:30:32 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:42.606 19:30:32 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:42.606 19:30:32 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:42.606 19:30:32 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:42.606 19:30:32 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:42.606 19:30:32 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:42.606 19:30:32 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:42.606 19:30:32 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:42.606 19:30:32 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:42.606 19:30:32 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:42.606 [2024-07-15 19:30:32.974922] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:08:42.607 [2024-07-15 19:30:32.975097] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66033 ] 00:08:42.607 [2024-07-15 19:30:33.148922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.863 [2024-07-15 19:30:33.426688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:43.122 19:30:33 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:43.122 19:30:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:45.650 19:30:35 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:45.650 00:08:45.650 real 0m3.029s 00:08:45.650 user 0m0.012s 00:08:45.650 sys 0m0.002s 00:08:45.650 ************************************ 00:08:45.650 END TEST accel_comp 00:08:45.650 ************************************ 00:08:45.650 19:30:35 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:45.650 19:30:35 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:45.650 19:30:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:45.650 19:30:35 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:45.650 19:30:35 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:45.650 19:30:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.650 19:30:35 accel -- common/autotest_common.sh@10 -- # set +x 00:08:45.650 ************************************ 00:08:45.650 START TEST accel_decomp 00:08:45.650 ************************************ 00:08:45.650 19:30:35 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:45.650 19:30:35 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:45.650 19:30:35 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:45.650 19:30:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:45.650 19:30:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:45.650 19:30:35 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:45.650 19:30:35 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:45.650 19:30:35 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:45.650 19:30:35 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:45.650 19:30:35 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:45.650 19:30:35 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:45.650 19:30:35 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:45.650 19:30:35 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:45.650 19:30:35 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:45.650 19:30:35 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:45.650 [2024-07-15 19:30:36.049290] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:08:45.650 [2024-07-15 19:30:36.049431] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66085 ] 00:08:45.650 [2024-07-15 19:30:36.219675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.908 [2024-07-15 19:30:36.494794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.167 19:30:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:46.167 19:30:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.167 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:46.167 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:46.167 19:30:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:46.167 19:30:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.167 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:46.167 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:46.167 19:30:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:46.167 19:30:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.167 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:46.167 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:46.167 19:30:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.168 19:30:36 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
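Note on the trace pattern above: the repeated IFS=: / read -r var val / case "$var" in entries appear to be accel.sh consuming colon-separated name/value pairs that describe the run (operation, block size, module, queue depth, run time) and stashing them in variables such as accel_opc and accel_module, which the test checks at the end. A minimal stand-alone loop in the same shape, purely illustrative and not the actual accel.sh source:

    printf '%s\n' 'opc:decompress' 'module:software' |
    while IFS=: read -r var val; do
        case "$var" in
            opc)    accel_opc=$val ;;     # operation under test, e.g. decompress
            module) accel_module=$val ;;  # backing module, e.g. software
        esac
        echo "parsed $var -> $val"
    done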
00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:46.168 19:30:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:48.696 19:30:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:48.696 19:30:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:48.696 19:30:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:48.696 19:30:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:48.696 19:30:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:48.696 19:30:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:48.696 19:30:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:48.696 19:30:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:48.696 19:30:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:48.696 19:30:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:48.696 19:30:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:48.696 19:30:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:48.696 19:30:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:48.696 19:30:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:48.696 19:30:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:48.696 19:30:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:48.696 19:30:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:48.696 19:30:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:48.696 19:30:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:48.696 19:30:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:48.696 19:30:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:48.696 19:30:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:48.696 19:30:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:48.696 19:30:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:48.696 19:30:39 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:48.696 19:30:39 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:48.696 19:30:39 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:48.696 00:08:48.696 real 0m3.022s 00:08:48.696 user 0m2.714s 00:08:48.696 sys 0m0.202s 00:08:48.696 19:30:39 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.696 ************************************ 00:08:48.696 END TEST accel_decomp 00:08:48.696 ************************************ 00:08:48.696 19:30:39 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:48.696 19:30:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:48.696 19:30:39 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:48.696 19:30:39 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:48.696 19:30:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.696 19:30:39 accel -- common/autotest_common.sh@10 -- # set +x 00:08:48.696 ************************************ 00:08:48.696 START TEST accel_decomp_full 00:08:48.696 ************************************ 00:08:48.696 19:30:39 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:48.696 19:30:39 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:48.696 19:30:39 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:48.696 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:48.696 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:48.696 19:30:39 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:48.696 19:30:39 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:48.696 19:30:39 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:48.696 19:30:39 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:48.696 19:30:39 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:48.696 19:30:39 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:48.696 19:30:39 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:48.696 19:30:39 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:48.696 19:30:39 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:48.696 19:30:39 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:48.696 [2024-07-15 19:30:39.127271] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
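As in the earlier tests, run_test (from common/autotest_common.sh) wraps accel_test, which launches the accel_perf example binary; the exact command is visible in the accel.sh@12 trace entry above. A hand-run equivalent from the same checkout would look roughly like the sketch below; the -c /dev/fd/62 argument seen in the log appears to feed a generated JSON accel config over a file descriptor and is omitted here, so accel_perf falls back to its defaults:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -o 0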
00:08:48.696 [2024-07-15 19:30:39.127447] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66137 ] 00:08:48.696 [2024-07-15 19:30:39.316276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.954 [2024-07-15 19:30:39.592545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:49.212 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.213 19:30:39 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.213 19:30:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:51.833 19:30:42 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:51.833 19:30:42 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:51.833 00:08:51.833 real 0m3.057s 00:08:51.833 user 0m2.762s 00:08:51.833 sys 0m0.196s 00:08:51.833 19:30:42 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:51.833 ************************************ 00:08:51.833 END TEST accel_decomp_full 00:08:51.833 ************************************ 00:08:51.833 19:30:42 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:51.833 19:30:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:51.833 19:30:42 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:51.833 19:30:42 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:51.833 19:30:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.833 19:30:42 accel -- common/autotest_common.sh@10 -- # set +x 00:08:51.833 ************************************ 00:08:51.833 START TEST accel_decomp_mcore 00:08:51.833 ************************************ 00:08:51.833 19:30:42 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:51.833 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:51.833 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:51.833 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:51.833 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:51.833 19:30:42 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:51.834 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:51.834 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:51.834 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:51.834 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:51.834 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:51.834 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:51.834 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:51.834 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:51.834 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:51.834 [2024-07-15 19:30:42.221596] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:08:51.834 [2024-07-15 19:30:42.221750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66189 ] 00:08:51.834 [2024-07-15 19:30:42.389251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.091 [2024-07-15 19:30:42.660298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.091 [2024-07-15 19:30:42.660370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.091 [2024-07-15 19:30:42.660511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.091 [2024-07-15 19:30:42.660519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.349 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:52.349 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.350 19:30:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:54.880 19:30:45 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:54.880 00:08:54.880 real 0m3.032s 00:08:54.880 user 0m0.015s 00:08:54.880 sys 0m0.002s 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:54.880 ************************************ 00:08:54.880 END TEST accel_decomp_mcore 00:08:54.880 ************************************ 00:08:54.880 19:30:45 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:54.880 19:30:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:54.880 19:30:45 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:54.880 19:30:45 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:54.880 19:30:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.880 19:30:45 accel -- common/autotest_common.sh@10 -- # set +x 00:08:54.880 ************************************ 00:08:54.880 START TEST accel_decomp_full_mcore 00:08:54.880 ************************************ 00:08:54.880 19:30:45 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:54.880 19:30:45 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:54.880 19:30:45 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:54.880 19:30:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:54.880 19:30:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:54.880 19:30:45 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:54.881 19:30:45 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:54.881 19:30:45 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:54.881 19:30:45 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:54.881 19:30:45 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:54.881 19:30:45 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:54.881 19:30:45 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:54.881 19:30:45 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:54.881 19:30:45 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:54.881 19:30:45 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:54.881 [2024-07-15 19:30:45.304006] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:08:54.881 [2024-07-15 19:30:45.304758] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66243 ] 00:08:54.881 [2024-07-15 19:30:45.474967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:55.139 [2024-07-15 19:30:45.749088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.139 [2024-07-15 19:30:45.749155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:55.139 [2024-07-15 19:30:45.749230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.139 [2024-07-15 19:30:45.749235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:55.398 19:30:46 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.398 19:30:46 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.398 19:30:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.001 19:30:48 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:58.001 00:08:58.001 real 0m3.058s 00:08:58.001 user 0m0.022s 00:08:58.001 sys 0m0.004s 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.001 19:30:48 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:58.001 ************************************ 00:08:58.001 END TEST accel_decomp_full_mcore 00:08:58.001 ************************************ 00:08:58.001 19:30:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:58.001 19:30:48 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:58.001 19:30:48 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:58.001 19:30:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.001 19:30:48 accel -- common/autotest_common.sh@10 -- # set +x 00:08:58.001 ************************************ 00:08:58.001 START TEST accel_decomp_mthread 00:08:58.001 ************************************ 00:08:58.001 19:30:48 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:58.001 19:30:48 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:58.001 19:30:48 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:58.001 19:30:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.001 19:30:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.001 19:30:48 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:58.001 19:30:48 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:58.001 19:30:48 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:58.001 19:30:48 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:58.001 19:30:48 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:58.001 19:30:48 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:58.001 19:30:48 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:58.001 19:30:48 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:58.001 19:30:48 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:58.001 19:30:48 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:58.001 [2024-07-15 19:30:48.433031] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
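accel_decomp_mthread is the single-core, multi-threaded variant: unlike the two mcore tests above, which passed -m 0xf and started reactors on cores 0-3, this run keeps the default core mask (-c 0x1 in the EAL parameters below) and instead passes -T 2 to accel_perf, which appears to request two worker threads on that core, as the accel.sh@12 trace entry above shows. A hedged, hand-run equivalent from the same tree:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -T 2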
00:08:58.001 [2024-07-15 19:30:48.433209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66294 ] 00:08:58.001 [2024-07-15 19:30:48.620331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.261 [2024-07-15 19:30:48.891714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
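The bracketed "DPDK EAL parameters" entry above repeats for every accel_perf launch in this log, differing only in the core mask (-c 0x1 here versus -c 0xf for the mcore runs) and in --file-prefix, which embeds the PID of that particular accel_perf process (spdk_pid66294 for this test). If the console output is saved to a file, the per-test PIDs can be pulled out with a one-liner such as the following (build.log is a hypothetical file name for the saved log):

    grep -o 'file-prefix=spdk_pid[0-9]*' build.log | sort -u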
00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:58.520 19:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:01.085 00:09:01.085 real 0m3.031s 00:09:01.085 user 0m2.719s 00:09:01.085 sys 0m0.212s 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.085 19:30:51 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:01.085 ************************************ 00:09:01.085 END TEST accel_decomp_mthread 00:09:01.085 ************************************ 00:09:01.085 19:30:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:01.085 19:30:51 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:01.085 19:30:51 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:01.085 19:30:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.085 19:30:51 accel -- common/autotest_common.sh@10 -- # set +x 00:09:01.085 ************************************ 00:09:01.085 START 
TEST accel_decomp_full_mthread 00:09:01.085 ************************************ 00:09:01.086 19:30:51 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:01.086 19:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:01.086 19:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:01.086 19:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.086 19:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.086 19:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:01.086 19:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:09:01.086 19:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:01.086 19:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:01.086 19:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:01.086 19:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:01.086 19:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:01.086 19:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:01.086 19:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:01.086 19:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:09:01.086 [2024-07-15 19:30:51.515726] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:09:01.086 [2024-07-15 19:30:51.515867] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66346 ] 00:09:01.086 [2024-07-15 19:30:51.691753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.343 [2024-07-15 19:30:52.002615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:09:01.600 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.601 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.601 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.601 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:01.601 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.601 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.601 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.601 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:01.601 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.601 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.601 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.601 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:01.601 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.601 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.601 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.601 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:01.601 19:30:52 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.601 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.601 19:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:04.179 00:09:04.179 real 0m3.118s 00:09:04.179 user 0m0.020s 00:09:04.179 sys 0m0.004s 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.179 19:30:54 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:04.179 ************************************ 00:09:04.179 END TEST accel_decomp_full_mthread 00:09:04.179 ************************************ 
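The accel_decomp_full_mthread case that just finished is driven end-to-end by the accel_perf example app; the wrapper's invocation is visible in the trace above. Below is a rough standalone sketch of that same run, assuming the default autotest layout under /home/vagrant/spdk_repo/spdk; flag meanings are inferred from the trace (-t run time in seconds, -w workload, -l input file for the decompress workload, -y verify the result, -o I/O size where 0 appears to select the full file size, -T worker thread count).
# Sketch only: the wrapper additionally feeds a generated accel JSON config on
# /dev/fd/62; with no extra modules requested it is assumed that config can be
# omitted here (hypothetical simplification).
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/accel_perf" \
    -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -T 2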
00:09:04.179 19:30:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:04.179 19:30:54 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:09:04.179 19:30:54 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:04.179 19:30:54 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:04.179 19:30:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.179 19:30:54 accel -- common/autotest_common.sh@10 -- # set +x 00:09:04.179 19:30:54 accel -- accel/accel.sh@137 -- # build_accel_config 00:09:04.179 19:30:54 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:04.179 19:30:54 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:04.179 19:30:54 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:04.179 19:30:54 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:04.179 19:30:54 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:04.179 19:30:54 accel -- accel/accel.sh@40 -- # local IFS=, 00:09:04.179 19:30:54 accel -- accel/accel.sh@41 -- # jq -r . 00:09:04.179 ************************************ 00:09:04.179 START TEST accel_dif_functional_tests 00:09:04.179 ************************************ 00:09:04.179 19:30:54 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:04.179 [2024-07-15 19:30:54.708681] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:09:04.179 [2024-07-15 19:30:54.708846] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66399 ] 00:09:04.179 [2024-07-15 19:30:54.879725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:04.468 [2024-07-15 19:30:55.149130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.468 [2024-07-15 19:30:55.149239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.468 [2024-07-15 19:30:55.149252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.033 00:09:05.033 00:09:05.033 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.033 http://cunit.sourceforge.net/ 00:09:05.033 00:09:05.033 00:09:05.033 Suite: accel_dif 00:09:05.033 Test: verify: DIF generated, GUARD check ...passed 00:09:05.033 Test: verify: DIF generated, APPTAG check ...passed 00:09:05.033 Test: verify: DIF generated, REFTAG check ...passed 00:09:05.033 Test: verify: DIF not generated, GUARD check ...[2024-07-15 19:30:55.580062] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:05.033 passed 00:09:05.033 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 19:30:55.580361] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:05.033 passed 00:09:05.033 Test: verify: DIF not generated, REFTAG check ...passed 00:09:05.033 Test: verify: APPTAG correct, APPTAG check ...passed 00:09:05.033 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 19:30:55.580480] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:05.033 passed 00:09:05.033 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-07-15 19:30:55.580637] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, 
Actual=14 00:09:05.033 passed 00:09:05.033 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:09:05.033 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:09:05.033 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 19:30:55.580929] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:05.033 passed 00:09:05.033 Test: verify copy: DIF generated, GUARD check ...passed 00:09:05.033 Test: verify copy: DIF generated, APPTAG check ...passed 00:09:05.033 Test: verify copy: DIF generated, REFTAG check ...passed 00:09:05.033 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 19:30:55.581323] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:05.033 passed 00:09:05.033 Test: verify copy: DIF not generated, APPTAG check ...passed 00:09:05.033 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 19:30:55.581452] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:05.033 [2024-07-15 19:30:55.581570] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:05.033 passed 00:09:05.033 Test: generate copy: DIF generated, GUARD check ...passed 00:09:05.033 Test: generate copy: DIF generated, APTTAG check ...passed 00:09:05.033 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:05.033 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:09:05.033 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:09:05.033 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:09:05.033 Test: generate copy: iovecs-len validate ...passed 00:09:05.033 Test: generate copy: buffer alignment validate ...passed 00:09:05.033 00:09:05.033 [2024-07-15 19:30:55.582025] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:09:05.033 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.033 suites 1 1 n/a 0 0 00:09:05.033 tests 26 26 26 0 0 00:09:05.033 asserts 115 115 115 0 n/a 00:09:05.033 00:09:05.033 Elapsed time = 0.007 seconds 00:09:06.450 00:09:06.450 real 0m2.521s 00:09:06.450 user 0m5.048s 00:09:06.450 sys 0m0.271s 00:09:06.450 19:30:57 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:06.450 19:30:57 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:09:06.450 ************************************ 00:09:06.450 END TEST accel_dif_functional_tests 00:09:06.450 ************************************ 00:09:06.450 19:30:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:06.450 00:09:06.450 real 1m12.110s 00:09:06.450 user 1m18.931s 00:09:06.450 sys 0m6.274s 00:09:06.450 19:30:57 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:06.450 19:30:57 accel -- common/autotest_common.sh@10 -- # set +x 00:09:06.450 ************************************ 00:09:06.450 END TEST accel 00:09:06.450 ************************************ 00:09:06.450 19:30:57 -- common/autotest_common.sh@1142 -- # return 0 00:09:06.450 19:30:57 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:06.450 19:30:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:06.450 19:30:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.450 19:30:57 -- common/autotest_common.sh@10 -- # set +x 00:09:06.450 ************************************ 00:09:06.450 START TEST accel_rpc 00:09:06.450 ************************************ 00:09:06.450 19:30:57 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:06.709 * Looking for test storage... 00:09:06.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:06.709 19:30:57 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:06.709 19:30:57 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=66492 00:09:06.709 19:30:57 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:06.709 19:30:57 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 66492 00:09:06.709 19:30:57 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 66492 ']' 00:09:06.709 19:30:57 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.709 19:30:57 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:06.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.709 19:30:57 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.709 19:30:57 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:06.709 19:30:57 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 [2024-07-15 19:30:57.470208] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
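The 26-test CUnit suite summarized above came from the DIF functional test binary, which run_test invoked as test/accel/dif/dif -c /dev/fd/62, with the accel JSON config generated by build_accel_config on that descriptor; the accel_rpc test now starting brings up spdk_tgt with --wait-for-rpc instead. A minimal sketch of rerunning the DIF suite by hand follows; the empty JSON object passed as the config is an assumption, not what the wrapper actually generated.
# Sketch: the core command behind accel_dif_functional_tests. The empty '{}'
# config is a hypothetical stand-in for the build_accel_config output, which is
# assumed to be sufficient when no opcode overrides are requested.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/test/accel/dif/dif" -c <(printf '{}')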
00:09:06.709 [2024-07-15 19:30:57.470403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66492 ] 00:09:06.967 [2024-07-15 19:30:57.654878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.225 [2024-07-15 19:30:57.932587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.483 19:30:58 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:07.483 19:30:58 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:07.483 19:30:58 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:07.483 19:30:58 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:09:07.483 19:30:58 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:07.483 19:30:58 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:09:07.483 19:30:58 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:07.483 19:30:58 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:07.483 19:30:58 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.483 19:30:58 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.483 ************************************ 00:09:07.483 START TEST accel_assign_opcode 00:09:07.483 ************************************ 00:09:07.483 19:30:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:09:07.483 19:30:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:07.483 19:30:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.483 19:30:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:07.483 [2024-07-15 19:30:58.253578] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:07.483 19:30:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.483 19:30:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:07.483 19:30:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.483 19:30:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:07.483 [2024-07-15 19:30:58.261549] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:07.483 19:30:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.483 19:30:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:07.483 19:30:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.483 19:30:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:08.854 19:30:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.854 19:30:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:08.854 19:30:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.854 19:30:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:08.854 19:30:59 accel_rpc.accel_assign_opcode 
-- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:08.854 19:30:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:09:08.854 19:30:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.854 software 00:09:08.854 00:09:08.854 real 0m1.064s 00:09:08.854 user 0m0.038s 00:09:08.854 sys 0m0.011s 00:09:08.854 19:30:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:08.854 ************************************ 00:09:08.854 END TEST accel_assign_opcode 00:09:08.854 ************************************ 00:09:08.854 19:30:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:08.854 19:30:59 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:08.854 19:30:59 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 66492 00:09:08.854 19:30:59 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 66492 ']' 00:09:08.854 19:30:59 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 66492 00:09:08.854 19:30:59 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:09:08.854 19:30:59 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:08.854 19:30:59 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66492 00:09:08.854 19:30:59 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:08.854 19:30:59 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:08.854 killing process with pid 66492 00:09:08.854 19:30:59 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66492' 00:09:08.854 19:30:59 accel_rpc -- common/autotest_common.sh@967 -- # kill 66492 00:09:08.854 19:30:59 accel_rpc -- common/autotest_common.sh@972 -- # wait 66492 00:09:12.135 00:09:12.135 real 0m5.143s 00:09:12.135 user 0m4.963s 00:09:12.135 sys 0m0.568s 00:09:12.135 19:31:02 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:12.135 19:31:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.135 ************************************ 00:09:12.135 END TEST accel_rpc 00:09:12.135 ************************************ 00:09:12.135 19:31:02 -- common/autotest_common.sh@1142 -- # return 0 00:09:12.135 19:31:02 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:12.135 19:31:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:12.135 19:31:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.135 19:31:02 -- common/autotest_common.sh@10 -- # set +x 00:09:12.135 ************************************ 00:09:12.135 START TEST app_cmdline 00:09:12.135 ************************************ 00:09:12.135 19:31:02 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:12.135 * Looking for test storage... 
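The accel_assign_opcode flow that completed above is a three-RPC exchange against the spdk_tgt started with --wait-for-rpc: assign the copy opcode to a module, finish initialization, then read the assignments back. A condensed sketch using scripts/rpc.py, assuming the target listens on the default /var/tmp/spdk.sock:
# Condensed sketch of the assign-opcode sequence traced above.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software   # pin the copy opcode to the software module
"$SPDK/scripts/rpc.py" framework_start_init                   # complete subsystem initialization
"$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy  # expected to print "software"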
00:09:12.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:12.135 19:31:02 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:12.135 19:31:02 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=66614 00:09:12.135 19:31:02 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 66614 00:09:12.135 19:31:02 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 66614 ']' 00:09:12.135 19:31:02 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.135 19:31:02 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:12.135 19:31:02 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.135 19:31:02 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:12.135 19:31:02 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:12.135 19:31:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:12.135 [2024-07-15 19:31:02.611008] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:09:12.135 [2024-07-15 19:31:02.611149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66614 ] 00:09:12.135 [2024-07-15 19:31:02.784126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.392 [2024-07-15 19:31:03.038380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.323 19:31:04 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:13.323 19:31:04 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:09:13.323 19:31:04 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:13.580 { 00:09:13.580 "version": "SPDK v24.09-pre git sha1 996bd8752", 00:09:13.580 "fields": { 00:09:13.580 "major": 24, 00:09:13.580 "minor": 9, 00:09:13.580 "patch": 0, 00:09:13.580 "suffix": "-pre", 00:09:13.580 "commit": "996bd8752" 00:09:13.580 } 00:09:13.580 } 00:09:13.580 19:31:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:13.580 19:31:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:13.580 19:31:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:13.580 19:31:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:13.580 19:31:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:13.580 19:31:04 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.580 19:31:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:13.580 19:31:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:13.580 19:31:04 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:13.580 19:31:04 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.839 19:31:04 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:13.839 19:31:04 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:13.839 19:31:04 app_cmdline -- 
app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:13.839 19:31:04 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:09:13.839 19:31:04 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:13.839 19:31:04 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.839 19:31:04 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.839 19:31:04 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.839 19:31:04 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.839 19:31:04 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.839 19:31:04 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.839 19:31:04 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.839 19:31:04 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:13.839 19:31:04 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:14.107 request: 00:09:14.107 { 00:09:14.107 "method": "env_dpdk_get_mem_stats", 00:09:14.107 "req_id": 1 00:09:14.107 } 00:09:14.107 Got JSON-RPC error response 00:09:14.107 response: 00:09:14.107 { 00:09:14.107 "code": -32601, 00:09:14.107 "message": "Method not found" 00:09:14.107 } 00:09:14.107 19:31:04 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:09:14.107 19:31:04 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:14.107 19:31:04 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:14.107 19:31:04 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:14.107 19:31:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 66614 00:09:14.107 19:31:04 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 66614 ']' 00:09:14.107 19:31:04 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 66614 00:09:14.107 19:31:04 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:09:14.107 19:31:04 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:14.107 19:31:04 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66614 00:09:14.107 killing process with pid 66614 00:09:14.107 19:31:04 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:14.107 19:31:04 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:14.107 19:31:04 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66614' 00:09:14.107 19:31:04 app_cmdline -- common/autotest_common.sh@967 -- # kill 66614 00:09:14.107 19:31:04 app_cmdline -- common/autotest_common.sh@972 -- # wait 66614 00:09:17.394 00:09:17.394 real 0m5.309s 00:09:17.394 user 0m5.650s 00:09:17.394 sys 0m0.642s 00:09:17.394 19:31:07 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:17.394 19:31:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:17.394 ************************************ 00:09:17.394 END TEST app_cmdline 00:09:17.394 ************************************ 00:09:17.394 19:31:07 -- common/autotest_common.sh@1142 -- # return 0 00:09:17.394 19:31:07 -- 
spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:17.394 19:31:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:17.394 19:31:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.394 19:31:07 -- common/autotest_common.sh@10 -- # set +x 00:09:17.394 ************************************ 00:09:17.394 START TEST version 00:09:17.394 ************************************ 00:09:17.394 19:31:07 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:17.394 * Looking for test storage... 00:09:17.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:17.394 19:31:07 version -- app/version.sh@17 -- # get_header_version major 00:09:17.394 19:31:07 version -- app/version.sh@14 -- # cut -f2 00:09:17.394 19:31:07 version -- app/version.sh@14 -- # tr -d '"' 00:09:17.394 19:31:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:17.394 19:31:07 version -- app/version.sh@17 -- # major=24 00:09:17.394 19:31:07 version -- app/version.sh@18 -- # get_header_version minor 00:09:17.394 19:31:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:17.394 19:31:07 version -- app/version.sh@14 -- # cut -f2 00:09:17.394 19:31:07 version -- app/version.sh@14 -- # tr -d '"' 00:09:17.394 19:31:07 version -- app/version.sh@18 -- # minor=9 00:09:17.394 19:31:07 version -- app/version.sh@19 -- # get_header_version patch 00:09:17.394 19:31:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:17.394 19:31:07 version -- app/version.sh@14 -- # cut -f2 00:09:17.394 19:31:07 version -- app/version.sh@14 -- # tr -d '"' 00:09:17.394 19:31:07 version -- app/version.sh@19 -- # patch=0 00:09:17.394 19:31:07 version -- app/version.sh@20 -- # get_header_version suffix 00:09:17.394 19:31:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:17.394 19:31:07 version -- app/version.sh@14 -- # cut -f2 00:09:17.394 19:31:07 version -- app/version.sh@14 -- # tr -d '"' 00:09:17.394 19:31:07 version -- app/version.sh@20 -- # suffix=-pre 00:09:17.394 19:31:07 version -- app/version.sh@22 -- # version=24.9 00:09:17.394 19:31:07 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:17.394 19:31:07 version -- app/version.sh@28 -- # version=24.9rc0 00:09:17.394 19:31:07 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:17.394 19:31:07 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:17.394 19:31:07 version -- app/version.sh@30 -- # py_version=24.9rc0 00:09:17.394 19:31:07 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:09:17.394 00:09:17.394 real 0m0.159s 00:09:17.394 user 0m0.088s 00:09:17.394 sys 0m0.100s 00:09:17.394 19:31:07 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:17.394 19:31:07 version -- common/autotest_common.sh@10 -- # set +x 00:09:17.394 ************************************ 00:09:17.394 END TEST version 00:09:17.394 ************************************ 
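The version test above derives 24.9rc0 by pulling each component out of include/spdk/version.h and cross-checking it against the Python package; a condensed sketch of the same extraction, with the repo path assumed to be the autotest default:
# Sketch of how version.sh assembles the version string seen above.
SPDK=/home/vagrant/spdk_repo/spdk
major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$SPDK/include/spdk/version.h" | cut -f2 | tr -d '"')
minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$SPDK/include/spdk/version.h" | cut -f2 | tr -d '"')
echo "${major}.${minor}"                                              # 24.9 for the tree under test
PYTHONPATH="$SPDK/python" python3 -c 'import spdk; print(spdk.__version__)'   # 24.9rc0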
00:09:17.394 19:31:07 -- common/autotest_common.sh@1142 -- # return 0 00:09:17.394 19:31:07 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:09:17.394 19:31:07 -- spdk/autotest.sh@198 -- # uname -s 00:09:17.394 19:31:07 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:09:17.394 19:31:07 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:09:17.394 19:31:07 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:09:17.394 19:31:07 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:09:17.394 19:31:07 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:17.394 19:31:07 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:17.394 19:31:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.394 19:31:07 -- common/autotest_common.sh@10 -- # set +x 00:09:17.394 ************************************ 00:09:17.394 START TEST blockdev_nvme 00:09:17.394 ************************************ 00:09:17.394 19:31:07 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:17.394 * Looking for test storage... 00:09:17.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:17.394 19:31:08 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66796 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:17.394 19:31:08 
blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:17.394 19:31:08 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 66796 00:09:17.394 19:31:08 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 66796 ']' 00:09:17.394 19:31:08 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.394 19:31:08 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:17.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.394 19:31:08 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.394 19:31:08 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:17.394 19:31:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:17.653 [2024-07-15 19:31:08.214362] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:09:17.653 [2024-07-15 19:31:08.214590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66796 ] 00:09:17.653 [2024-07-15 19:31:08.403450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.910 [2024-07-15 19:31:08.679921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.284 19:31:09 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:19.284 19:31:09 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:09:19.284 19:31:09 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:09:19.284 19:31:09 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:09:19.284 19:31:09 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:09:19.284 19:31:09 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:19.284 19:31:09 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:19.284 19:31:09 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:19.284 19:31:09 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.284 19:31:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:19.542 19:31:10 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.542 19:31:10 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:09:19.542 19:31:10 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.542 19:31:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:19.542 19:31:10 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.542 19:31:10 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:09:19.542 19:31:10 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n 
accel 00:09:19.542 19:31:10 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.542 19:31:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:19.542 19:31:10 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.542 19:31:10 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:09:19.542 19:31:10 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.542 19:31:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:19.542 19:31:10 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.543 19:31:10 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:19.543 19:31:10 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.543 19:31:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:19.543 19:31:10 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.543 19:31:10 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:09:19.543 19:31:10 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:09:19.543 19:31:10 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.543 19:31:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:19.543 19:31:10 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:09:19.543 19:31:10 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.801 19:31:10 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:09:19.801 19:31:10 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:09:19.802 19:31:10 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "c6cf623b-0e49-423e-bd54-72200f252a27"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c6cf623b-0e49-423e-bd54-72200f252a27",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "9c3adb96-e011-4f89-b762-5f56faa7bae2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "9c3adb96-e011-4f89-b762-5f56faa7bae2",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "74b609d6-3f50-4c58-9f5a-06d06cb8f0cb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "74b609d6-3f50-4c58-9f5a-06d06cb8f0cb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "e1713567-6f58-42ce-a654-a2c3dc8a84ff"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e1713567-6f58-42ce-a654-a2c3dc8a84ff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": 
false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "3fc7f6cf-8571-48db-b613-228eb6877a8c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3fc7f6cf-8571-48db-b613-228eb6877a8c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "b73c8152-2c5d-490d-973a-8caa8d5d5da5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b73c8152-2c5d-490d-973a-8caa8d5d5da5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' 
"firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:19.802 19:31:10 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:09:19.802 19:31:10 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:09:19.802 19:31:10 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:09:19.802 19:31:10 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 66796 00:09:19.802 19:31:10 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 66796 ']' 00:09:19.802 19:31:10 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 66796 00:09:19.802 19:31:10 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:09:19.802 19:31:10 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:19.802 19:31:10 blockdev_nvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66796 00:09:19.802 killing process with pid 66796 00:09:19.802 19:31:10 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:19.802 19:31:10 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:19.802 19:31:10 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66796' 00:09:19.802 19:31:10 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 66796 00:09:19.802 19:31:10 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 66796 00:09:23.080 19:31:13 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:23.080 19:31:13 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:23.080 19:31:13 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:23.080 19:31:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:23.080 19:31:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:23.080 ************************************ 00:09:23.080 START TEST bdev_hello_world 00:09:23.080 ************************************ 00:09:23.080 19:31:13 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:23.080 [2024-07-15 19:31:13.501828] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:09:23.080 [2024-07-15 19:31:13.501977] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66904 ] 00:09:23.080 [2024-07-15 19:31:13.680865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.337 [2024-07-15 19:31:14.021731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.268 [2024-07-15 19:31:14.815810] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:24.268 [2024-07-15 19:31:14.815877] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:24.268 [2024-07-15 19:31:14.815912] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:24.268 [2024-07-15 19:31:14.819574] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:24.268 [2024-07-15 19:31:14.820212] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:24.268 [2024-07-15 19:31:14.820252] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:24.268 [2024-07-15 19:31:14.820397] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:24.268 00:09:24.268 [2024-07-15 19:31:14.820425] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:25.642 ************************************ 00:09:25.642 END TEST bdev_hello_world 00:09:25.642 ************************************ 00:09:25.642 00:09:25.642 real 0m2.829s 00:09:25.642 user 0m2.418s 00:09:25.642 sys 0m0.293s 00:09:25.642 19:31:16 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:25.642 19:31:16 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:25.642 19:31:16 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:25.642 19:31:16 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:09:25.642 19:31:16 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:25.642 19:31:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.642 19:31:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:25.642 ************************************ 00:09:25.642 START TEST bdev_bounds 00:09:25.642 ************************************ 00:09:25.642 19:31:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:09:25.642 19:31:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=66957 00:09:25.642 19:31:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:25.642 19:31:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:25.642 19:31:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 66957' 00:09:25.642 Process bdevio pid: 66957 00:09:25.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
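The bdev_bounds stage launched just above is a two-step flow: bdevio is started so that it waits for a perform_tests trigger over RPC (that is what the -w flag appears to do here), and the CUnit suites that follow are then kicked off by tests.py once the app is listening on /var/tmp/spdk.sock. A condensed sketch, assuming the same repo layout shown in the trace:

  # Start bdevio and let it wait for the perform_tests trigger over RPC.
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  # Once /var/tmp/spdk.sock is up, run the CUnit suites shown below.
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests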
00:09:25.642 19:31:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 66957 00:09:25.642 19:31:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 66957 ']' 00:09:25.642 19:31:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.642 19:31:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:25.642 19:31:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.642 19:31:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:25.642 19:31:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:25.642 [2024-07-15 19:31:16.401325] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:09:25.642 [2024-07-15 19:31:16.402586] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66957 ] 00:09:25.900 [2024-07-15 19:31:16.576940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:26.158 [2024-07-15 19:31:16.851060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.158 [2024-07-15 19:31:16.851169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.158 [2024-07-15 19:31:16.851127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.094 19:31:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:27.094 19:31:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:09:27.094 19:31:17 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:27.094 I/O targets: 00:09:27.094 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:27.094 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:27.094 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:27.094 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:27.094 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:27.094 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:27.094 00:09:27.094 00:09:27.094 CUnit - A unit testing framework for C - Version 2.1-3 00:09:27.094 http://cunit.sourceforge.net/ 00:09:27.094 00:09:27.094 00:09:27.094 Suite: bdevio tests on: Nvme3n1 00:09:27.094 Test: blockdev write read block ...passed 00:09:27.094 Test: blockdev write zeroes read block ...passed 00:09:27.094 Test: blockdev write zeroes read no split ...passed 00:09:27.094 Test: blockdev write zeroes read split ...passed 00:09:27.094 Test: blockdev write zeroes read split partial ...passed 00:09:27.094 Test: blockdev reset ...[2024-07-15 19:31:17.861120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:09:27.094 passed 00:09:27.094 Test: blockdev write read 8 blocks ...[2024-07-15 19:31:17.865546] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:27.094 passed 00:09:27.094 Test: blockdev write read size > 128k ...passed 00:09:27.094 Test: blockdev write read invalid size ...passed 00:09:27.094 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:27.094 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:27.094 Test: blockdev write read max offset ...passed 00:09:27.094 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:27.094 Test: blockdev writev readv 8 blocks ...passed 00:09:27.094 Test: blockdev writev readv 30 x 1block ...passed 00:09:27.094 Test: blockdev writev readv block ...passed 00:09:27.094 Test: blockdev writev readv size > 128k ...passed 00:09:27.094 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:27.094 Test: blockdev comparev and writev ...[2024-07-15 19:31:17.874833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26980a000 len:0x1000 00:09:27.094 [2024-07-15 19:31:17.874912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:27.094 passed 00:09:27.094 Test: blockdev nvme passthru rw ...passed 00:09:27.094 Test: blockdev nvme passthru vendor specific ...[2024-07-15 19:31:17.875630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:27.094 passed 00:09:27.094 Test: blockdev nvme admin passthru ...[2024-07-15 19:31:17.875677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:27.094 passed 00:09:27.094 Test: blockdev copy ...passed 00:09:27.094 Suite: bdevio tests on: Nvme2n3 00:09:27.094 Test: blockdev write read block ...passed 00:09:27.359 Test: blockdev write zeroes read block ...passed 00:09:27.359 Test: blockdev write zeroes read no split ...passed 00:09:27.359 Test: blockdev write zeroes read split ...passed 00:09:27.359 Test: blockdev write zeroes read split partial ...passed 00:09:27.359 Test: blockdev reset ...[2024-07-15 19:31:17.986680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:27.359 [2024-07-15 19:31:17.991498] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:27.359 passed 00:09:27.359 Test: blockdev write read 8 blocks ...passed 00:09:27.360 Test: blockdev write read size > 128k ...passed 00:09:27.360 Test: blockdev write read invalid size ...passed 00:09:27.360 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:27.360 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:27.360 Test: blockdev write read max offset ...passed 00:09:27.360 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:27.360 Test: blockdev writev readv 8 blocks ...passed 00:09:27.360 Test: blockdev writev readv 30 x 1block ...passed 00:09:27.360 Test: blockdev writev readv block ...passed 00:09:27.360 Test: blockdev writev readv size > 128k ...passed 00:09:27.360 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:27.360 Test: blockdev comparev and writev ...[2024-07-15 19:31:18.001562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x279004000 len:0x1000 00:09:27.360 [2024-07-15 19:31:18.001818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:27.360 passed 00:09:27.360 Test: blockdev nvme passthru rw ...passed 00:09:27.360 Test: blockdev nvme passthru vendor specific ...passed 00:09:27.360 Test: blockdev nvme admin passthru ...[2024-07-15 19:31:18.002945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:27.360 [2024-07-15 19:31:18.003001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:27.360 passed 00:09:27.360 Test: blockdev copy ...passed 00:09:27.360 Suite: bdevio tests on: Nvme2n2 00:09:27.360 Test: blockdev write read block ...passed 00:09:27.360 Test: blockdev write zeroes read block ...passed 00:09:27.360 Test: blockdev write zeroes read no split ...passed 00:09:27.360 Test: blockdev write zeroes read split ...passed 00:09:27.360 Test: blockdev write zeroes read split partial ...passed 00:09:27.360 Test: blockdev reset ...[2024-07-15 19:31:18.111383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:27.360 [2024-07-15 19:31:18.116596] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:27.360 passed 00:09:27.360 Test: blockdev write read 8 blocks ...passed 00:09:27.360 Test: blockdev write read size > 128k ...passed 00:09:27.360 Test: blockdev write read invalid size ...passed 00:09:27.360 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:27.360 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:27.360 Test: blockdev write read max offset ...passed 00:09:27.360 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:27.360 Test: blockdev writev readv 8 blocks ...passed 00:09:27.360 Test: blockdev writev readv 30 x 1block ...passed 00:09:27.360 Test: blockdev writev readv block ...passed 00:09:27.360 Test: blockdev writev readv size > 128k ...passed 00:09:27.360 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:27.360 Test: blockdev comparev and writev ...[2024-07-15 19:31:18.125293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x275a3a000 len:0x1000 00:09:27.360 [2024-07-15 19:31:18.125361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:27.360 passed 00:09:27.360 Test: blockdev nvme passthru rw ...passed 00:09:27.360 Test: blockdev nvme passthru vendor specific ...[2024-07-15 19:31:18.126028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:27.360 [2024-07-15 19:31:18.126065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:27.360 passed 00:09:27.360 Test: blockdev nvme admin passthru ...passed 00:09:27.360 Test: blockdev copy ...passed 00:09:27.360 Suite: bdevio tests on: Nvme2n1 00:09:27.360 Test: blockdev write read block ...passed 00:09:27.360 Test: blockdev write zeroes read block ...passed 00:09:27.360 Test: blockdev write zeroes read no split ...passed 00:09:27.619 Test: blockdev write zeroes read split ...passed 00:09:27.619 Test: blockdev write zeroes read split partial ...passed 00:09:27.619 Test: blockdev reset ...[2024-07-15 19:31:18.238924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:27.619 [2024-07-15 19:31:18.243813] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:27.619 passed 00:09:27.619 Test: blockdev write read 8 blocks ...passed 00:09:27.619 Test: blockdev write read size > 128k ...passed 00:09:27.619 Test: blockdev write read invalid size ...passed 00:09:27.619 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:27.619 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:27.619 Test: blockdev write read max offset ...passed 00:09:27.619 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:27.619 Test: blockdev writev readv 8 blocks ...passed 00:09:27.619 Test: blockdev writev readv 30 x 1block ...passed 00:09:27.619 Test: blockdev writev readv block ...passed 00:09:27.619 Test: blockdev writev readv size > 128k ...passed 00:09:27.619 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:27.619 Test: blockdev comparev and writev ...[2024-07-15 19:31:18.252929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x275a34000 len:0x1000 00:09:27.619 [2024-07-15 19:31:18.252997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:27.619 passed 00:09:27.619 Test: blockdev nvme passthru rw ...passed 00:09:27.619 Test: blockdev nvme passthru vendor specific ...passed 00:09:27.619 Test: blockdev nvme admin passthru ...[2024-07-15 19:31:18.253851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:27.619 [2024-07-15 19:31:18.253895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:27.619 passed 00:09:27.619 Test: blockdev copy ...passed 00:09:27.619 Suite: bdevio tests on: Nvme1n1 00:09:27.619 Test: blockdev write read block ...passed 00:09:27.619 Test: blockdev write zeroes read block ...passed 00:09:27.619 Test: blockdev write zeroes read no split ...passed 00:09:27.619 Test: blockdev write zeroes read split ...passed 00:09:27.619 Test: blockdev write zeroes read split partial ...passed 00:09:27.619 Test: blockdev reset ...[2024-07-15 19:31:18.366229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:09:27.619 passed 00:09:27.619 Test: blockdev write read 8 blocks ...[2024-07-15 19:31:18.370581] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:27.619 passed 00:09:27.619 Test: blockdev write read size > 128k ...passed 00:09:27.619 Test: blockdev write read invalid size ...passed 00:09:27.619 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:27.619 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:27.619 Test: blockdev write read max offset ...passed 00:09:27.619 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:27.619 Test: blockdev writev readv 8 blocks ...passed 00:09:27.619 Test: blockdev writev readv 30 x 1block ...passed 00:09:27.619 Test: blockdev writev readv block ...passed 00:09:27.619 Test: blockdev writev readv size > 128k ...passed 00:09:27.619 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:27.619 Test: blockdev comparev and writev ...[2024-07-15 19:31:18.379019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x275a30000 len:0x1000 00:09:27.619 [2024-07-15 19:31:18.379091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:27.619 passed 00:09:27.619 Test: blockdev nvme passthru rw ...passed 00:09:27.619 Test: blockdev nvme passthru vendor specific ...passed 00:09:27.619 Test: blockdev nvme admin passthru ...[2024-07-15 19:31:18.379824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:27.619 [2024-07-15 19:31:18.379867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:27.619 passed 00:09:27.619 Test: blockdev copy ...passed 00:09:27.619 Suite: bdevio tests on: Nvme0n1 00:09:27.619 Test: blockdev write read block ...passed 00:09:27.619 Test: blockdev write zeroes read block ...passed 00:09:27.619 Test: blockdev write zeroes read no split ...passed 00:09:27.878 Test: blockdev write zeroes read split ...passed 00:09:27.878 Test: blockdev write zeroes read split partial ...passed 00:09:27.878 Test: blockdev reset ...[2024-07-15 19:31:18.478111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:27.878 [2024-07-15 19:31:18.482518] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:27.878 passed 00:09:27.878 Test: blockdev write read 8 blocks ...passed 00:09:27.878 Test: blockdev write read size > 128k ...passed 00:09:27.878 Test: blockdev write read invalid size ...passed 00:09:27.878 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:27.878 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:27.878 Test: blockdev write read max offset ...passed 00:09:27.878 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:27.878 Test: blockdev writev readv 8 blocks ...passed 00:09:27.878 Test: blockdev writev readv 30 x 1block ...passed 00:09:27.878 Test: blockdev writev readv block ...passed 00:09:27.878 Test: blockdev writev readv size > 128k ...passed 00:09:27.878 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:27.878 Test: blockdev comparev and writev ...passed 00:09:27.878 Test: blockdev nvme passthru rw ...[2024-07-15 19:31:18.489633] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:27.878 separate metadata which is not supported yet. 
00:09:27.878 passed 00:09:27.878 Test: blockdev nvme passthru vendor specific ...passed 00:09:27.878 Test: blockdev nvme admin passthru ...[2024-07-15 19:31:18.490064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:27.878 [2024-07-15 19:31:18.490124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:27.878 passed 00:09:27.878 Test: blockdev copy ...passed 00:09:27.878 00:09:27.878 Run Summary: Type Total Ran Passed Failed Inactive 00:09:27.878 suites 6 6 n/a 0 0 00:09:27.878 tests 138 138 138 0 0 00:09:27.878 asserts 893 893 893 0 n/a 00:09:27.878 00:09:27.878 Elapsed time = 2.055 seconds 00:09:27.878 0 00:09:27.878 19:31:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 66957 00:09:27.878 19:31:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 66957 ']' 00:09:27.878 19:31:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 66957 00:09:27.878 19:31:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:09:27.878 19:31:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:27.878 19:31:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66957 00:09:27.878 19:31:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:27.878 19:31:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:27.878 19:31:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66957' 00:09:27.878 killing process with pid 66957 00:09:27.878 19:31:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 66957 00:09:27.878 19:31:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 66957 00:09:29.250 19:31:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:09:29.250 00:09:29.250 real 0m3.589s 00:09:29.250 user 0m8.854s 00:09:29.250 sys 0m0.448s 00:09:29.250 19:31:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:29.250 19:31:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:29.250 ************************************ 00:09:29.250 END TEST bdev_bounds 00:09:29.250 ************************************ 00:09:29.250 19:31:19 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:29.250 19:31:19 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:29.250 19:31:19 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:29.250 19:31:19 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.250 19:31:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:29.250 ************************************ 00:09:29.250 START TEST bdev_nbd 00:09:29.250 ************************************ 00:09:29.250 19:31:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:29.250 19:31:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:09:29.251 19:31:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:09:29.251 
19:31:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.251 19:31:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:29.251 19:31:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:29.251 19:31:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:09:29.251 19:31:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:09:29.251 19:31:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:09:29.251 19:31:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:29.251 19:31:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:09:29.251 19:31:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:09:29.251 19:31:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:29.251 19:31:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:09:29.251 19:31:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:29.251 19:31:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:09:29.251 19:31:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=67022 00:09:29.251 19:31:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:29.251 19:31:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:29.251 19:31:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 67022 /var/tmp/spdk-nbd.sock 00:09:29.251 19:31:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 67022 ']' 00:09:29.251 19:31:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:29.251 19:31:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:29.251 19:31:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:29.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:29.251 19:31:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:29.251 19:31:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:29.508 [2024-07-15 19:31:20.042835] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
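With bdev_svc now starting up and exposing its RPC socket at /var/tmp/spdk-nbd.sock, the remainder of this stage exports each bdev as a kernel NBD device, reads one block through it with dd, and later tears the exports down again. A condensed sketch of that round trip for a single bdev, using the same commands and scratch file that appear in the trace:

  # Helper around the per-test RPC socket used by this stage.
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

  rpc nbd_start_disk Nvme0n1 /dev/nbd0    # export the bdev as /dev/nbd0
  # Read a single 4 KiB block through the NBD device (direct I/O, as the test does).
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
  rpc nbd_get_disks                       # list the active NBD exports
  rpc nbd_stop_disk /dev/nbd0             # detach the export again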
00:09:29.508 [2024-07-15 19:31:20.043200] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.508 [2024-07-15 19:31:20.222147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.765 [2024-07-15 19:31:20.494410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.712 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:30.712 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:09:30.712 19:31:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:30.712 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:30.712 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:30.712 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:30.712 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:30.712 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:30.712 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:30.712 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:30.712 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:30.712 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:30.712 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:30.712 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:30.712 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:30.969 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:30.969 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:30.969 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:30.969 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:30.969 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:30.969 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:30.969 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:30.969 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:30.970 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:30.970 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:30.970 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:30.970 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:30.970 1+0 records in 
00:09:30.970 1+0 records out 00:09:30.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618572 s, 6.6 MB/s 00:09:30.970 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:30.970 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:30.970 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:30.970 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:30.970 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:30.970 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:30.970 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:30.970 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:31.227 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:31.227 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:31.227 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:31.227 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:31.227 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:31.227 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:31.227 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:31.227 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:31.227 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:31.227 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:31.227 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:31.227 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.227 1+0 records in 00:09:31.227 1+0 records out 00:09:31.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000659933 s, 6.2 MB/s 00:09:31.227 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.227 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:31.227 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.227 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:31.227 19:31:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:31.227 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:31.227 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:31.227 19:31:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:31.501 19:31:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:31.501 19:31:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:31.501 19:31:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:09:31.501 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:09:31.501 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:31.501 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:31.501 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:31.501 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:09:31.501 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:31.501 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:31.501 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:31.501 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.501 1+0 records in 00:09:31.501 1+0 records out 00:09:31.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000640415 s, 6.4 MB/s 00:09:31.501 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.501 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:31.501 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.501 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:31.501 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:31.501 19:31:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:31.501 19:31:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:31.501 19:31:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:32.067 1+0 records in 00:09:32.067 1+0 records out 00:09:32.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508645 s, 8.1 MB/s 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.067 19:31:22 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:32.067 1+0 records in 00:09:32.067 1+0 records out 00:09:32.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000925886 s, 4.4 MB/s 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:32.067 19:31:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:32.633 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:32.633 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:32.633 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:32.633 19:31:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:09:32.633 19:31:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:32.633 19:31:23 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:32.633 19:31:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:32.633 19:31:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:09:32.633 19:31:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:32.633 19:31:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:32.633 19:31:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:32.633 19:31:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:32.633 1+0 records in 00:09:32.633 1+0 records out 00:09:32.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000540318 s, 7.6 MB/s 00:09:32.634 19:31:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.634 19:31:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:32.634 19:31:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.634 19:31:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:32.634 19:31:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:32.634 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:32.634 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:32.634 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:32.892 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:32.892 { 00:09:32.892 "nbd_device": "/dev/nbd0", 00:09:32.892 "bdev_name": "Nvme0n1" 00:09:32.892 }, 00:09:32.892 { 00:09:32.892 "nbd_device": "/dev/nbd1", 00:09:32.892 "bdev_name": "Nvme1n1" 00:09:32.892 }, 00:09:32.892 { 00:09:32.892 "nbd_device": "/dev/nbd2", 00:09:32.892 "bdev_name": "Nvme2n1" 00:09:32.892 }, 00:09:32.892 { 00:09:32.892 "nbd_device": "/dev/nbd3", 00:09:32.892 "bdev_name": "Nvme2n2" 00:09:32.892 }, 00:09:32.892 { 00:09:32.892 "nbd_device": "/dev/nbd4", 00:09:32.892 "bdev_name": "Nvme2n3" 00:09:32.892 }, 00:09:32.892 { 00:09:32.892 "nbd_device": "/dev/nbd5", 00:09:32.892 "bdev_name": "Nvme3n1" 00:09:32.892 } 00:09:32.892 ]' 00:09:32.892 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:32.892 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:32.892 { 00:09:32.892 "nbd_device": "/dev/nbd0", 00:09:32.892 "bdev_name": "Nvme0n1" 00:09:32.892 }, 00:09:32.892 { 00:09:32.892 "nbd_device": "/dev/nbd1", 00:09:32.892 "bdev_name": "Nvme1n1" 00:09:32.892 }, 00:09:32.892 { 00:09:32.892 "nbd_device": "/dev/nbd2", 00:09:32.892 "bdev_name": "Nvme2n1" 00:09:32.892 }, 00:09:32.892 { 00:09:32.892 "nbd_device": "/dev/nbd3", 00:09:32.892 "bdev_name": "Nvme2n2" 00:09:32.892 }, 00:09:32.892 { 00:09:32.892 "nbd_device": "/dev/nbd4", 00:09:32.892 "bdev_name": "Nvme2n3" 00:09:32.892 }, 00:09:32.892 { 00:09:32.892 "nbd_device": "/dev/nbd5", 00:09:32.892 "bdev_name": "Nvme3n1" 00:09:32.892 } 00:09:32.892 ]' 00:09:32.892 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:32.892 19:31:23 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:09:32.892 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.892 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:09:32.892 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:32.892 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:32.892 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:32.892 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:33.205 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:33.205 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:33.205 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:33.205 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.205 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.205 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:33.205 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:33.205 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.205 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.205 19:31:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:33.484 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:33.484 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:33.484 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:33.484 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.484 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.484 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:33.484 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:33.484 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.484 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.484 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:33.748 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:33.748 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:33.748 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:33.748 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.748 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.748 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:33.748 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:33.748 19:31:24 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:33.748 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.748 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:33.748 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:33.748 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:33.748 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:33.748 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.748 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.748 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:33.748 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:33.749 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.749 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.749 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:34.006 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:34.006 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:34.006 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:34.006 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.006 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.006 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:34.006 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:34.006 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.006 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:34.006 19:31:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:34.265 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:34.265 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:34.265 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:34.265 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.265 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.265 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:34.265 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:34.265 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.265 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:34.265 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.265 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:34.524 19:31:25 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:34.524 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:34.782 /dev/nbd0 00:09:34.782 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:34.782 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:34.782 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:34.782 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:34.782 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:34.782 
19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:34.782 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:34.782 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:34.782 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:34.782 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:34.782 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:34.782 1+0 records in 00:09:34.782 1+0 records out 00:09:34.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436208 s, 9.4 MB/s 00:09:34.782 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.040 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:35.040 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.040 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:35.041 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:35.041 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:35.041 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:35.041 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:09:35.298 /dev/nbd1 00:09:35.298 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:35.298 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:35.298 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:35.298 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:35.298 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:35.298 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:35.298 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:35.298 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:35.298 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:35.298 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:35.298 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:35.298 1+0 records in 00:09:35.298 1+0 records out 00:09:35.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000540632 s, 7.6 MB/s 00:09:35.298 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.298 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:35.298 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.298 19:31:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:35.298 19:31:25 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@887 -- # return 0 00:09:35.298 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:35.299 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:35.299 19:31:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:09:35.556 /dev/nbd10 00:09:35.556 19:31:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:35.556 19:31:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:35.556 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:09:35.556 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:35.556 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:35.556 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:35.556 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:09:35.556 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:35.556 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:35.556 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:35.556 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:35.556 1+0 records in 00:09:35.556 1+0 records out 00:09:35.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549229 s, 7.5 MB/s 00:09:35.556 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.556 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:35.556 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.556 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:35.556 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:35.556 19:31:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:35.556 19:31:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:35.556 19:31:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:09:35.813 /dev/nbd11 00:09:35.813 19:31:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:35.813 19:31:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:35.814 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:09:35.814 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:35.814 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:35.814 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:35.814 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:09:35.814 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:35.814 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:35.814 19:31:26 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:35.814 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:35.814 1+0 records in 00:09:35.814 1+0 records out 00:09:35.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567914 s, 7.2 MB/s 00:09:35.814 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.814 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:35.814 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.814 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:35.814 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:35.814 19:31:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:35.814 19:31:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:35.814 19:31:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:09:36.072 /dev/nbd12 00:09:36.072 19:31:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:36.072 19:31:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:36.072 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:09:36.072 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:36.072 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:36.072 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:36.072 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:09:36.072 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:36.072 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:36.072 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:36.072 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:36.072 1+0 records in 00:09:36.072 1+0 records out 00:09:36.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494346 s, 8.3 MB/s 00:09:36.072 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:36.072 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:36.072 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:36.072 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:36.072 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:36.072 19:31:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:36.072 19:31:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:36.072 19:31:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:09:36.330 /dev/nbd13 
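Annotation: the autotest_common.sh@866-887 xtrace lines above show the waitfornbd helper gating each nbd_start_disk call, and the nbd_common.sh@35-45 lines earlier in this section show its counterpart waitfornbd_exit gating nbd_stop_disk. A minimal sketch of the pair, reconstructed from the trace; the sleep between retries, the temp-file path and the give-up path are assumptions, since the trace only shows the immediate-success case.

    waitfornbd() {
        local nbd_name=$1 tmp=/tmp/nbdtest i   # real test uses test/bdev/nbdtest
        # wait for the kernel to publish the device in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # prove the device answers I/O: read one 4 KiB block with O_DIRECT
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct 2> /dev/null; then
                [[ $(stat -c %s "$tmp") != 0 ]] && break
            fi
            sleep 0.1
        done
        rm -f "$tmp"
        return 0
    }

    waitfornbd_exit() {
        local nbd_name=$1 i
        # after nbd_stop_disk, poll until the device disappears again
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
        return 0
    }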
00:09:36.330 19:31:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:36.330 19:31:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:36.330 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:09:36.330 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:36.330 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:36.330 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:36.330 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:09:36.330 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:36.330 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:36.330 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:36.330 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:36.330 1+0 records in 00:09:36.330 1+0 records out 00:09:36.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00071314 s, 5.7 MB/s 00:09:36.330 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:36.330 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:36.330 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:36.330 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:36.330 19:31:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:36.330 19:31:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:36.330 19:31:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:36.330 19:31:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:36.330 19:31:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:36.330 19:31:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:36.590 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:36.590 { 00:09:36.590 "nbd_device": "/dev/nbd0", 00:09:36.590 "bdev_name": "Nvme0n1" 00:09:36.590 }, 00:09:36.590 { 00:09:36.590 "nbd_device": "/dev/nbd1", 00:09:36.590 "bdev_name": "Nvme1n1" 00:09:36.590 }, 00:09:36.590 { 00:09:36.590 "nbd_device": "/dev/nbd10", 00:09:36.590 "bdev_name": "Nvme2n1" 00:09:36.590 }, 00:09:36.590 { 00:09:36.590 "nbd_device": "/dev/nbd11", 00:09:36.590 "bdev_name": "Nvme2n2" 00:09:36.590 }, 00:09:36.590 { 00:09:36.590 "nbd_device": "/dev/nbd12", 00:09:36.590 "bdev_name": "Nvme2n3" 00:09:36.590 }, 00:09:36.590 { 00:09:36.590 "nbd_device": "/dev/nbd13", 00:09:36.590 "bdev_name": "Nvme3n1" 00:09:36.590 } 00:09:36.590 ]' 00:09:36.590 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:36.590 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:36.590 { 00:09:36.590 "nbd_device": "/dev/nbd0", 00:09:36.590 "bdev_name": "Nvme0n1" 00:09:36.590 }, 00:09:36.590 { 00:09:36.590 "nbd_device": "/dev/nbd1", 00:09:36.590 "bdev_name": "Nvme1n1" 00:09:36.590 
}, 00:09:36.590 { 00:09:36.590 "nbd_device": "/dev/nbd10", 00:09:36.590 "bdev_name": "Nvme2n1" 00:09:36.590 }, 00:09:36.590 { 00:09:36.590 "nbd_device": "/dev/nbd11", 00:09:36.590 "bdev_name": "Nvme2n2" 00:09:36.590 }, 00:09:36.590 { 00:09:36.590 "nbd_device": "/dev/nbd12", 00:09:36.590 "bdev_name": "Nvme2n3" 00:09:36.590 }, 00:09:36.590 { 00:09:36.590 "nbd_device": "/dev/nbd13", 00:09:36.590 "bdev_name": "Nvme3n1" 00:09:36.590 } 00:09:36.590 ]' 00:09:36.590 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:36.590 /dev/nbd1 00:09:36.590 /dev/nbd10 00:09:36.590 /dev/nbd11 00:09:36.590 /dev/nbd12 00:09:36.590 /dev/nbd13' 00:09:36.590 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:36.590 /dev/nbd1 00:09:36.590 /dev/nbd10 00:09:36.590 /dev/nbd11 00:09:36.590 /dev/nbd12 00:09:36.590 /dev/nbd13' 00:09:36.590 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:36.590 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:09:36.590 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:09:36.590 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:09:36.590 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:09:36.590 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:09:36.590 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:36.590 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:36.590 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:36.590 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:36.590 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:36.591 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:36.591 256+0 records in 00:09:36.591 256+0 records out 00:09:36.591 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00880881 s, 119 MB/s 00:09:36.591 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:36.591 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:36.849 256+0 records in 00:09:36.849 256+0 records out 00:09:36.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125697 s, 8.3 MB/s 00:09:36.849 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:36.849 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:36.849 256+0 records in 00:09:36.849 256+0 records out 00:09:36.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133364 s, 7.9 MB/s 00:09:36.849 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:36.849 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:37.106 256+0 records in 00:09:37.106 256+0 records out 00:09:37.106 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.130497 s, 8.0 MB/s 00:09:37.107 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:37.107 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:37.107 256+0 records in 00:09:37.107 256+0 records out 00:09:37.107 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141511 s, 7.4 MB/s 00:09:37.107 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:37.107 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:37.365 256+0 records in 00:09:37.365 256+0 records out 00:09:37.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15161 s, 6.9 MB/s 00:09:37.365 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:37.365 19:31:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:37.365 256+0 records in 00:09:37.365 256+0 records out 00:09:37.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150833 s, 7.0 MB/s 00:09:37.365 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:09:37.365 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:37.365 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:37.365 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:37.365 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:37.365 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:37.365 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:37.365 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.365 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:37.365 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.365 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:37.365 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.365 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:37.624 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.624 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:37.624 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.624 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:37.624 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.624 19:31:28 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:37.624 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:37.624 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:37.624 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.624 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:37.624 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:37.624 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:37.624 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:37.624 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:37.882 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:37.882 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:37.882 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:37.882 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:37.882 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:37.882 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:37.882 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:37.882 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:37.882 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:37.882 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:37.882 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:37.882 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:37.882 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:37.882 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:37.882 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:37.882 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:37.882 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:37.882 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:37.882 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:37.882 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:38.446 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:38.446 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:38.446 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:38.446 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.446 19:31:28 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.446 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:38.446 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:38.446 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.446 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.446 19:31:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:38.446 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:38.446 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:38.446 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:38.446 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.446 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.446 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:38.446 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:38.446 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.446 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.446 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:38.704 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:38.704 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:38.704 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:38.704 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.704 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.704 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:38.704 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:38.704 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.704 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.704 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:38.964 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:38.964 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:38.964 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:38.964 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.964 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.964 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:38.964 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:38.964 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.964 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:38.964 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:09:38.964 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:39.222 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:39.222 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:39.222 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:39.222 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:39.222 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:39.222 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:39.222 19:31:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:39.222 19:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:39.222 19:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:39.222 19:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:39.222 19:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:39.222 19:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:39.222 19:31:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:39.222 19:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.222 19:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:39.222 19:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:39.222 19:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:39.222 19:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:39.480 malloc_lvol_verify 00:09:39.737 19:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:39.737 07cf3f57-b968-4052-8a91-3f7792a5af2f 00:09:39.994 19:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:39.994 cacac821-3661-4f4f-83dc-cb07820e1baf 00:09:39.994 19:31:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:40.252 /dev/nbd0 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:40.510 mke2fs 1.46.5 (30-Dec-2021) 00:09:40.510 Discarding device blocks: 0/4096 done 00:09:40.510 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:40.510 00:09:40.510 Allocating group tables: 0/1 done 00:09:40.510 Writing inode tables: 0/1 done 00:09:40.510 Creating journal (1024 blocks): done 00:09:40.510 Writing superblocks and filesystem accounting information: 0/1 done 00:09:40.510 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:40.510 19:31:31 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 67022 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 67022 ']' 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 67022 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:40.510 19:31:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67022 00:09:40.768 killing process with pid 67022 00:09:40.768 19:31:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:40.768 19:31:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:40.768 19:31:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67022' 00:09:40.768 19:31:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 67022 00:09:40.768 19:31:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 67022 00:09:42.140 19:31:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:09:42.140 00:09:42.140 real 0m12.882s 00:09:42.140 user 0m17.191s 00:09:42.140 sys 0m4.729s 00:09:42.140 19:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:42.140 ************************************ 00:09:42.140 END TEST bdev_nbd 00:09:42.140 ************************************ 00:09:42.140 19:31:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:42.140 19:31:32 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:42.140 19:31:32 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:09:42.140 19:31:32 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:09:42.140 skipping fio tests on NVMe due to multi-ns failures. 
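Annotation: the autotest_common.sh@948-972 trace just above is the killprocess helper tearing down the spdk-nbd app (pid 67022). A rough sketch of that helper as it appears in the trace; the sudo branch and the empty-pid guard are inferred from the conditionals shown, not from the paths actually taken here.

    killprocess() {
        local pid=$1 process_name
        [[ -z $pid ]] && return 1
        # only signal the process if it is still alive
        kill -0 "$pid" || return 0
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
        fi
        echo "killing process with pid $pid"
        if [[ $process_name == sudo ]]; then
            # the target was started through sudo, so the signal must be too
            sudo kill "$pid"
        else
            kill "$pid"
        fi
        # reap the child so its exit status is collected before the test returns
        wait "$pid" || true
    }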
00:09:42.140 19:31:32 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:09:42.140 19:31:32 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:42.140 19:31:32 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:42.140 19:31:32 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:09:42.140 19:31:32 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:42.140 19:31:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:42.140 ************************************ 00:09:42.140 START TEST bdev_verify 00:09:42.140 ************************************ 00:09:42.140 19:31:32 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:42.427 [2024-07-15 19:31:32.995699] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:09:42.427 [2024-07-15 19:31:32.995891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67428 ] 00:09:42.427 [2024-07-15 19:31:33.181801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:42.684 [2024-07-15 19:31:33.453809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.684 [2024-07-15 19:31:33.453834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.613 Running I/O for 5 seconds... 
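Annotation: the bdev_verify test above boils down to a single bdevperf run against the bdev.json written earlier in the suite. The command below is reconstructed from the run_test line; the $SPDK shorthand and the reading of -C (taken from bdevperf's usage text) are assumptions, the other flag values are copied from the trace.

    SPDK=/home/vagrant/spdk_repo/spdk
    args=(
        --json "$SPDK/test/bdev/bdev.json"   # bdev config generated during setup
        -q 128                               # queue depth per job
        -o 4096                              # 4 KiB I/O size
        -w verify                            # write, read back and compare
        -t 5                                 # run for 5 seconds
        -C                                   # every core submits I/O to each bdev
        -m 0x3                               # reactors on cores 0 and 1, matching the per-core jobs in the table below
    )
    "$SPDK/build/examples/bdevperf" "${args[@]}"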
00:09:48.923 00:09:48.923 Latency(us) 00:09:48.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.923 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:48.923 Verification LBA range: start 0x0 length 0xbd0bd 00:09:48.923 Nvme0n1 : 5.07 1641.24 6.41 0.00 0.00 77809.01 8363.64 143804.71 00:09:48.923 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:48.923 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:09:48.924 Nvme0n1 : 5.06 1680.67 6.57 0.00 0.00 75918.33 13044.78 74898.29 00:09:48.924 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:48.924 Verification LBA range: start 0x0 length 0xa0000 00:09:48.924 Nvme1n1 : 5.07 1641.51 6.41 0.00 0.00 77661.33 12420.63 139810.13 00:09:48.924 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:48.924 Verification LBA range: start 0xa0000 length 0xa0000 00:09:48.924 Nvme1n1 : 5.06 1681.52 6.57 0.00 0.00 75757.81 8738.13 67408.46 00:09:48.924 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:48.924 Verification LBA range: start 0x0 length 0x80000 00:09:48.924 Nvme2n1 : 5.07 1640.98 6.41 0.00 0.00 77516.24 12170.97 133818.27 00:09:48.924 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:48.924 Verification LBA range: start 0x80000 length 0x80000 00:09:48.924 Nvme2n1 : 5.06 1680.95 6.57 0.00 0.00 75617.58 9237.46 63413.88 00:09:48.924 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:48.924 Verification LBA range: start 0x0 length 0x80000 00:09:48.924 Nvme2n2 : 5.07 1640.09 6.41 0.00 0.00 77405.12 13606.52 137812.85 00:09:48.924 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:48.924 Verification LBA range: start 0x80000 length 0x80000 00:09:48.924 Nvme2n2 : 5.07 1678.79 6.56 0.00 0.00 75537.79 3370.42 82887.44 00:09:48.924 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:48.924 Verification LBA range: start 0x0 length 0x80000 00:09:48.924 Nvme2n3 : 5.08 1639.23 6.40 0.00 0.00 77301.37 13481.69 140808.78 00:09:48.924 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:48.924 Verification LBA range: start 0x80000 length 0x80000 00:09:48.924 Nvme2n3 : 5.07 1677.87 6.55 0.00 0.00 75408.70 4899.60 80390.83 00:09:48.924 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:48.924 Verification LBA range: start 0x0 length 0x20000 00:09:48.924 Nvme3n1 : 5.08 1638.43 6.40 0.00 0.00 77171.48 10360.93 142806.06 00:09:48.924 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:48.924 Verification LBA range: start 0x20000 length 0x20000 00:09:48.924 Nvme3n1 : 5.08 1676.98 6.55 0.00 0.00 75360.26 10485.76 80390.83 00:09:48.924 =================================================================================================================== 00:09:48.924 Total : 19918.25 77.81 0.00 0.00 76528.04 3370.42 143804.71 00:09:50.823 00:09:50.823 real 0m8.416s 00:09:50.823 user 0m15.151s 00:09:50.823 sys 0m0.355s 00:09:50.823 19:31:41 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:50.823 19:31:41 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:50.823 ************************************ 00:09:50.823 END TEST bdev_verify 00:09:50.823 ************************************ 00:09:50.823 19:31:41 blockdev_nvme -- 
common/autotest_common.sh@1142 -- # return 0 00:09:50.823 19:31:41 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:50.823 19:31:41 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:09:50.823 19:31:41 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.823 19:31:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:50.823 ************************************ 00:09:50.823 START TEST bdev_verify_big_io 00:09:50.823 ************************************ 00:09:50.823 19:31:41 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:50.823 [2024-07-15 19:31:41.466145] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:09:50.823 [2024-07-15 19:31:41.466324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67537 ] 00:09:51.080 [2024-07-15 19:31:41.664947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:51.338 [2024-07-15 19:31:41.928251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.338 [2024-07-15 19:31:41.928274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.271 Running I/O for 5 seconds... 00:09:58.834 00:09:58.834 Latency(us) 00:09:58.834 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.834 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:58.834 Verification LBA range: start 0x0 length 0xbd0b 00:09:58.834 Nvme0n1 : 5.57 137.87 8.62 0.00 0.00 892493.21 17850.76 970681.78 00:09:58.834 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:58.834 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:58.834 Nvme0n1 : 5.66 135.69 8.48 0.00 0.00 917276.85 30084.14 954703.48 00:09:58.834 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:58.834 Verification LBA range: start 0x0 length 0xa000 00:09:58.834 Nvme1n1 : 5.67 139.79 8.74 0.00 0.00 857689.08 89877.94 810898.77 00:09:58.834 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:58.834 Verification LBA range: start 0xa000 length 0xa000 00:09:58.834 Nvme1n1 : 5.66 135.64 8.48 0.00 0.00 893532.65 71403.03 798915.05 00:09:58.834 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:58.834 Verification LBA range: start 0x0 length 0x8000 00:09:58.834 Nvme2n1 : 5.82 135.28 8.45 0.00 0.00 853708.02 68407.10 1446036.24 00:09:58.834 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:58.834 Verification LBA range: start 0x8000 length 0x8000 00:09:58.834 Nvme2n1 : 5.66 135.57 8.47 0.00 0.00 868450.82 86882.01 762963.87 00:09:58.834 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:58.834 Verification LBA range: start 0x0 length 0x8000 00:09:58.834 Nvme2n2 : 5.85 140.07 8.75 0.00 0.00 808963.37 89877.94 1462014.54 00:09:58.834 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 
00:09:58.834 Verification LBA range: start 0x8000 length 0x8000 00:09:58.834 Nvme2n2 : 5.74 138.51 8.66 0.00 0.00 825731.15 71403.03 774947.60 00:09:58.834 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:58.834 Verification LBA range: start 0x0 length 0x8000 00:09:58.834 Nvme2n3 : 5.87 149.33 9.33 0.00 0.00 742082.77 14854.83 1493971.14 00:09:58.834 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:58.834 Verification LBA range: start 0x8000 length 0x8000 00:09:58.834 Nvme2n3 : 5.83 149.10 9.32 0.00 0.00 750560.60 42442.36 778942.17 00:09:58.835 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:58.835 Verification LBA range: start 0x0 length 0x2000 00:09:58.835 Nvme3n1 : 5.88 160.29 10.02 0.00 0.00 671930.45 8051.57 1509949.44 00:09:58.835 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:58.835 Verification LBA range: start 0x2000 length 0x2000 00:09:58.835 Nvme3n1 : 5.84 158.92 9.93 0.00 0.00 685860.32 1014.25 814893.35 00:09:58.835 =================================================================================================================== 00:09:58.835 Total : 1716.05 107.25 0.00 0.00 808183.41 1014.25 1509949.44 00:10:00.734 00:10:00.734 real 0m9.673s 00:10:00.734 user 0m17.671s 00:10:00.734 sys 0m0.397s 00:10:00.734 19:31:51 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:00.734 19:31:51 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:10:00.734 ************************************ 00:10:00.734 END TEST bdev_verify_big_io 00:10:00.734 ************************************ 00:10:00.734 19:31:51 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:00.734 19:31:51 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:00.734 19:31:51 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:00.734 19:31:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:00.734 19:31:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:00.734 ************************************ 00:10:00.734 START TEST bdev_write_zeroes 00:10:00.734 ************************************ 00:10:00.734 19:31:51 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:00.734 [2024-07-15 19:31:51.198110] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:10:00.734 [2024-07-15 19:31:51.198295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67663 ] 00:10:00.734 [2024-07-15 19:31:51.392036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.993 [2024-07-15 19:31:51.681689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.927 Running I/O for 1 seconds... 
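Annotation: bdev_verify, bdev_verify_big_io and the bdev_write_zeroes run starting above all reuse the same bdevperf binary and JSON config; only the workload flags change. run_bdevperf below is a hypothetical wrapper, not part of the test scripts, with the flag sets copied from the three run_test lines in this log.

    SPDK=/home/vagrant/spdk_repo/spdk
    # hypothetical wrapper; flags copied from the three run_test invocations
    run_bdevperf() {
        local workload=$1 io_size=$2 runtime=$3; shift 3
        "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
            -q 128 -o "$io_size" -w "$workload" -t "$runtime" "$@"
    }
    run_bdevperf verify       4096  5 -C -m 0x3   # bdev_verify: 4 KiB verified I/O on two cores
    run_bdevperf verify       65536 5 -C -m 0x3   # bdev_verify_big_io: same, but 64 KiB blocks
    run_bdevperf write_zeroes 4096  1             # bdev_write_zeroes: single core, 1 second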
00:10:02.879 00:10:02.879 Latency(us) 00:10:02.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.879 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:02.879 Nvme0n1 : 1.01 8578.19 33.51 0.00 0.00 14861.51 12295.80 24217.11 00:10:02.879 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:02.879 Nvme1n1 : 1.02 8564.60 33.46 0.00 0.00 14862.23 12483.05 25839.91 00:10:02.879 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:02.879 Nvme2n1 : 1.02 8578.97 33.51 0.00 0.00 14816.58 11609.23 24341.94 00:10:02.879 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:02.879 Nvme2n2 : 1.03 8604.23 33.61 0.00 0.00 14688.00 7458.62 23343.30 00:10:02.879 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:02.879 Nvme2n3 : 1.03 8591.33 33.56 0.00 0.00 14658.19 7084.13 23468.13 00:10:02.879 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:02.879 Nvme3n1 : 1.03 8516.18 33.27 0.00 0.00 14750.91 7957.94 23468.13 00:10:02.879 =================================================================================================================== 00:10:02.879 Total : 51433.50 200.91 0.00 0.00 14772.44 7084.13 25839.91 00:10:04.779 00:10:04.779 real 0m4.038s 00:10:04.779 user 0m3.603s 00:10:04.779 sys 0m0.310s 00:10:04.779 19:31:55 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.779 ************************************ 00:10:04.779 END TEST bdev_write_zeroes 00:10:04.779 ************************************ 00:10:04.779 19:31:55 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:10:04.779 19:31:55 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:04.779 19:31:55 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:04.779 19:31:55 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:04.779 19:31:55 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.779 19:31:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:04.779 ************************************ 00:10:04.779 START TEST bdev_json_nonenclosed 00:10:04.779 ************************************ 00:10:04.779 19:31:55 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:04.779 [2024-07-15 19:31:55.267301] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:10:04.779 [2024-07-15 19:31:55.267450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67723 ] 00:10:04.779 [2024-07-15 19:31:55.440345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.038 [2024-07-15 19:31:55.705796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.038 [2024-07-15 19:31:55.705913] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:05.038 [2024-07-15 19:31:55.705938] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:05.038 [2024-07-15 19:31:55.705956] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:05.637 00:10:05.637 real 0m1.064s 00:10:05.637 user 0m0.789s 00:10:05.637 sys 0m0.168s 00:10:05.637 19:31:56 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:10:05.637 19:31:56 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:05.637 19:31:56 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:05.637 ************************************ 00:10:05.637 END TEST bdev_json_nonenclosed 00:10:05.637 ************************************ 00:10:05.637 19:31:56 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:10:05.637 19:31:56 blockdev_nvme -- bdev/blockdev.sh@782 -- # true 00:10:05.637 19:31:56 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:05.637 19:31:56 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:05.637 19:31:56 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:05.637 19:31:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:05.637 ************************************ 00:10:05.637 START TEST bdev_json_nonarray 00:10:05.637 ************************************ 00:10:05.637 19:31:56 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:05.637 [2024-07-15 19:31:56.394392] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:10:05.637 [2024-07-15 19:31:56.394550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67757 ] 00:10:05.900 [2024-07-15 19:31:56.559987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.158 [2024-07-15 19:31:56.869531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.158 [2024-07-15 19:31:56.869642] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
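Annotation: the es=234 / return 234 / true sequence above is how the suite records an expected failure. nonenclosed.json and nonarray.json are deliberately malformed configs, and both tests pass precisely because bdevperf rejects them. A hedged sketch of that handling follows; it is not the literal run_test implementation, and the $SPDK shorthand is an assumption.

    SPDK=/home/vagrant/spdk_repo/spdk
    # expected-failure check, sketched; run_test's real bookkeeping differs in detail
    if "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/nonenclosed.json" \
        -q 128 -o 4096 -w write_zeroes -t 1 ''; then
        echo "bdevperf accepted a JSON config that is not enclosed in {}" >&2
        exit 1
    else
        es=$?     # 234 in the trace above
        true      # rejection is the expected outcome, so the overall test still passes
    fi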
00:10:06.158 [2024-07-15 19:31:56.869665] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:06.158 [2024-07-15 19:31:56.869684] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:06.723 00:10:06.723 real 0m1.105s 00:10:06.723 user 0m0.843s 00:10:06.723 sys 0m0.153s 00:10:06.723 ************************************ 00:10:06.723 END TEST bdev_json_nonarray 00:10:06.723 ************************************ 00:10:06.723 19:31:57 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:10:06.723 19:31:57 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:06.723 19:31:57 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:06.723 19:31:57 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:10:06.723 19:31:57 blockdev_nvme -- bdev/blockdev.sh@785 -- # true 00:10:06.723 19:31:57 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:10:06.723 19:31:57 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:10:06.723 19:31:57 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:10:06.723 19:31:57 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:10:06.723 19:31:57 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:10:06.723 19:31:57 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:06.723 19:31:57 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:06.723 19:31:57 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:10:06.723 19:31:57 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:10:06.723 19:31:57 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:10:06.723 19:31:57 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:10:06.723 ************************************ 00:10:06.723 END TEST blockdev_nvme 00:10:06.723 ************************************ 00:10:06.723 00:10:06.723 real 0m49.471s 00:10:06.723 user 1m12.118s 00:10:06.723 sys 0m7.840s 00:10:06.723 19:31:57 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:06.723 19:31:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:06.723 19:31:57 -- common/autotest_common.sh@1142 -- # return 0 00:10:06.723 19:31:57 -- spdk/autotest.sh@213 -- # uname -s 00:10:06.723 19:31:57 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:10:06.723 19:31:57 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:06.723 19:31:57 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:06.723 19:31:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.723 19:31:57 -- common/autotest_common.sh@10 -- # set +x 00:10:06.723 ************************************ 00:10:06.723 START TEST blockdev_nvme_gpt 00:10:06.723 ************************************ 00:10:06.723 19:31:57 blockdev_nvme_gpt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:06.981 * Looking for test storage... 
00:10:06.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:10:06.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67834 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 67834 00:10:06.981 19:31:57 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:06.981 19:31:57 blockdev_nvme_gpt -- common/autotest_common.sh@829 -- # '[' -z 67834 ']' 00:10:06.981 19:31:57 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.981 19:31:57 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:06.981 19:31:57 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
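spdk_tgt has just been launched (pid 67834) and waitforlisten blocks until its RPC socket answers; the xtrace shows rpc_addr=/var/tmp/spdk.sock and max_retries=100. The loop below is a simplified sketch of that wait, not the real helper; rpc_get_methods is used here only as a cheap liveness probe.

wait_for_rpc() {   # simplified sketch of waitforlisten
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
  for ((i = 0; i < 100; i++)); do                      # max_retries=100, as traced above
    kill -0 "$pid" 2>/dev/null || return 1             # give up if the target died
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
      return 0                                         # socket is up and answering
    fi
    sleep 0.1
  done
  return 1
}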
00:10:06.981 19:31:57 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:06.981 19:31:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:06.981 [2024-07-15 19:31:57.741152] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:10:06.981 [2024-07-15 19:31:57.741291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67834 ] 00:10:07.238 [2024-07-15 19:31:57.906676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.495 [2024-07-15 19:31:58.165108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.867 19:31:59 blockdev_nvme_gpt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:08.867 19:31:59 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # return 0 00:10:08.867 19:31:59 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:10:08.867 19:31:59 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:10:08.867 19:31:59 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:08.867 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:09.124 Waiting for block devices as requested 00:10:09.124 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:09.396 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:09.396 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:09.396 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:14.653 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:14.653 19:32:05 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:10:14.653 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:10:14.653 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:10:14.653 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:10:14.653 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:14.653 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:10:14.653 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:10:14.653 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local 
device=nvme2n1 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:10:14.654 19:32:05 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:14.654 19:32:05 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:11.0/nvme/nvme0/nvme0n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n2' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n3' '/sys/bus/pci/drivers/nvme/0000:00:13.0/nvme/nvme3/nvme3c3n1') 00:10:14.654 19:32:05 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:10:14.654 19:32:05 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:10:14.654 19:32:05 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:10:14.654 19:32:05 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:10:14.654 19:32:05 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme1n1 00:10:14.654 19:32:05 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme1n1 -ms print 00:10:14.654 19:32:05 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme1n1: unrecognised disk label 00:10:14.654 BYT; 00:10:14.654 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:10:14.654 
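The long run of is_block_zoned calls above boils down to one question per namespace: does /sys/block/<dev>/queue/zoned report anything other than "none"? Zoned namespaces would be skipped when picking the device to label with GPT. A condensed restatement of that scan:

declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
  dev=${nvme##*/}
  [[ -e $nvme/queue/zoned ]] || continue               # attribute missing -> treat as not zoned
  [[ $(<"$nvme/queue/zoned") == none ]] || zoned_devs[$dev]=1
done
# in this run every namespace reported "none", so zoned_devs stays empty and
# /dev/nvme1n1 is picked as the GPT target for the parted/sgdisk steps that follow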
19:32:05 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme1n1: unrecognised disk label 00:10:14.654 BYT; 00:10:14.654 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\1\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:10:14.654 19:32:05 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme1n1 00:10:14.654 19:32:05 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:10:14.654 19:32:05 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme1n1 ]] 00:10:14.654 19:32:05 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:10:14.654 19:32:05 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:14.654 19:32:05 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:10:14.654 19:32:05 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:10:14.654 19:32:05 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:10:14.654 19:32:05 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:14.654 19:32:05 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:14.654 19:32:05 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:10:14.654 19:32:05 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:10:14.654 19:32:05 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:14.654 19:32:05 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:10:14.654 19:32:05 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:14.654 19:32:05 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:14.654 19:32:05 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:14.654 19:32:05 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:10:14.654 19:32:05 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:10:14.654 19:32:05 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:14.654 19:32:05 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:14.654 19:32:05 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:10:14.654 19:32:05 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:10:14.654 19:32:05 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:14.654 19:32:05 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:10:14.654 19:32:05 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:14.654 19:32:05 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:14.654 19:32:05 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:14.654 19:32:05 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 
1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1 00:10:15.588 The operation has completed successfully. 00:10:15.588 19:32:06 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1 00:10:16.966 The operation has completed successfully. 00:10:16.966 19:32:07 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:17.224 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:17.791 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:17.791 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:17.791 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:18.048 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:18.048 19:32:08 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:10:18.048 19:32:08 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.048 19:32:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:18.048 [] 00:10:18.048 19:32:08 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.048 19:32:08 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:10:18.048 19:32:08 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:10:18.048 19:32:08 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:18.049 19:32:08 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:18.049 19:32:08 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:18.049 19:32:08 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.049 19:32:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:18.307 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.307 19:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:10:18.566 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.566 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:18.566 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.566 19:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:10:18.566 19:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:10:18.566 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.566 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:18.566 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.566 19:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:10:18.566 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.566 19:32:09 blockdev_nvme_gpt -- 
common/autotest_common.sh@10 -- # set +x 00:10:18.566 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.566 19:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:18.566 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.566 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:18.566 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.566 19:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:10:18.566 19:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:10:18.566 19:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:10:18.566 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.566 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:18.566 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.566 19:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:10:18.567 19:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' 
"unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "9791399b-2ac6-4fa2-83c6-ef969bddf489"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "9791399b-2ac6-4fa2-83c6-ef969bddf489",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "1d6d3c13-15b3-42e1-aed0-a08dddfc38a1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1d6d3c13-15b3-42e1-aed0-a08dddfc38a1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "9995d3c6-29ec-49ff-8e59-d8223ebfe8ac"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9995d3c6-29ec-49ff-8e59-d8223ebfe8ac",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "9956b8a2-81e2-4e6f-a781-cfa5b91b5524"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9956b8a2-81e2-4e6f-a781-cfa5b91b5524",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "bc98ea04-eccc-4fd3-9a5e-3ec4f485d77c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "bc98ea04-eccc-4fd3-9a5e-3ec4f485d77c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": 
"0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:18.567 19:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:10:18.825 19:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:10:18.825 19:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:10:18.825 19:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:10:18.825 19:32:09 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 67834 00:10:18.825 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@948 -- # '[' -z 67834 ']' 00:10:18.825 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # kill -0 67834 00:10:18.825 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # uname 00:10:18.825 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:18.825 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67834 00:10:18.825 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:18.825 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:18.825 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67834' 00:10:18.825 killing process with pid 67834 00:10:18.825 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # kill 67834 00:10:18.825 19:32:09 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # wait 67834 00:10:22.127 19:32:12 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:22.127 19:32:12 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:10:22.127 19:32:12 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:22.127 19:32:12 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.127 19:32:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:22.127 ************************************ 00:10:22.127 START TEST bdev_hello_world 00:10:22.127 ************************************ 00:10:22.127 19:32:12 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:10:22.127 [2024-07-15 19:32:12.279479] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:10:22.127 [2024-07-15 19:32:12.279608] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68474 ] 00:10:22.127 [2024-07-15 19:32:12.444172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.127 [2024-07-15 19:32:12.704260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.692 [2024-07-15 19:32:13.447082] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:22.692 [2024-07-15 19:32:13.447133] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:10:22.692 [2024-07-15 19:32:13.447161] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:22.692 [2024-07-15 19:32:13.450542] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:22.692 [2024-07-15 19:32:13.451070] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:22.692 [2024-07-15 19:32:13.451107] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:22.692 [2024-07-15 19:32:13.451365] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:10:22.692 00:10:22.692 [2024-07-15 19:32:13.451408] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:24.595 00:10:24.595 real 0m2.731s 00:10:24.595 user 0m2.354s 00:10:24.595 sys 0m0.263s 00:10:24.595 19:32:14 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:24.595 19:32:14 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:24.595 ************************************ 00:10:24.595 END TEST bdev_hello_world 00:10:24.595 ************************************ 00:10:24.595 19:32:14 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:10:24.595 19:32:14 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:10:24.595 19:32:14 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:24.595 19:32:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.595 19:32:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:24.595 ************************************ 00:10:24.595 START TEST bdev_bounds 00:10:24.595 ************************************ 00:10:24.595 19:32:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:10:24.595 Process bdevio pid: 68522 00:10:24.595 19:32:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=68522 00:10:24.595 19:32:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:24.595 19:32:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 68522' 00:10:24.595 19:32:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:24.595 19:32:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 68522 00:10:24.595 19:32:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 68522 ']' 00:10:24.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
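For reference, the bdev configuration consumed by hello_bdev above and by the bdevio run that follows was generated by gen_nvme.sh and loaded through rpc_cmd load_subsystem_config earlier in this run; it is nothing more than four PCIe attach calls. The same call, rewrapped for readability (content exactly as logged):

rpc_cmd load_subsystem_config -j '{
  "subsystem": "bdev",
  "config": [
    { "method": "bdev_nvme_attach_controller",
      "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
    { "method": "bdev_nvme_attach_controller",
      "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } },
    { "method": "bdev_nvme_attach_controller",
      "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:12.0" } },
    { "method": "bdev_nvme_attach_controller",
      "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:13.0" } }
  ]
}'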
00:10:24.595 19:32:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.595 19:32:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:24.595 19:32:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.595 19:32:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:24.595 19:32:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:24.595 [2024-07-15 19:32:15.112671] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:10:24.595 [2024-07-15 19:32:15.112859] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68522 ] 00:10:24.595 [2024-07-15 19:32:15.296867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:24.866 [2024-07-15 19:32:15.560612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.866 [2024-07-15 19:32:15.560753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.866 [2024-07-15 19:32:15.560851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:25.801 19:32:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:25.801 19:32:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:10:25.801 19:32:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:25.801 I/O targets: 00:10:25.801 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:10:25.801 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:10:25.801 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:10:25.801 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:25.801 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:25.801 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:25.801 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:25.801 00:10:25.801 00:10:25.801 CUnit - A unit testing framework for C - Version 2.1-3 00:10:25.801 http://cunit.sourceforge.net/ 00:10:25.801 00:10:25.801 00:10:25.801 Suite: bdevio tests on: Nvme3n1 00:10:25.801 Test: blockdev write read block ...passed 00:10:25.801 Test: blockdev write zeroes read block ...passed 00:10:25.801 Test: blockdev write zeroes read no split ...passed 00:10:25.801 Test: blockdev write zeroes read split ...passed 00:10:25.801 Test: blockdev write zeroes read split partial ...passed 00:10:25.801 Test: blockdev reset ...[2024-07-15 19:32:16.580090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:10:25.801 [2024-07-15 19:32:16.584557] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
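The I/O target sizes that bdevio prints are simply block count times block size; for instance the first GPT partition's 774144 blocks of 4096 bytes are the 3024 MiB shown, and Nvme1n1's 1310720 blocks are 5120 MiB:

echo $(( 774144 * 4096 / 1024 / 1024 ))    # 3024  -> Nvme0n1p1 (3024 MiB)
echo $(( 1310720 * 4096 / 1024 / 1024 ))   # 5120  -> Nvme1n1  (5120 MiB)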
00:10:25.801 passed 00:10:25.801 Test: blockdev write read 8 blocks ...passed 00:10:25.801 Test: blockdev write read size > 128k ...passed 00:10:25.801 Test: blockdev write read invalid size ...passed 00:10:25.801 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:25.801 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:25.801 Test: blockdev write read max offset ...passed 00:10:25.801 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:25.801 Test: blockdev writev readv 8 blocks ...passed 00:10:25.801 Test: blockdev writev readv 30 x 1block ...passed 00:10:26.060 Test: blockdev writev readv block ...passed 00:10:26.060 Test: blockdev writev readv size > 128k ...passed 00:10:26.060 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:26.060 Test: blockdev comparev and writev ...[2024-07-15 19:32:16.594818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26f004000 len:0x1000 00:10:26.060 [2024-07-15 19:32:16.594880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:26.060 passed 00:10:26.060 Test: blockdev nvme passthru rw ...passed 00:10:26.060 Test: blockdev nvme passthru vendor specific ...passed 00:10:26.060 Test: blockdev nvme admin passthru ...[2024-07-15 19:32:16.595563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:26.060 [2024-07-15 19:32:16.595608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:26.060 passed 00:10:26.060 Test: blockdev copy ...passed 00:10:26.060 Suite: bdevio tests on: Nvme2n3 00:10:26.060 Test: blockdev write read block ...passed 00:10:26.060 Test: blockdev write zeroes read block ...passed 00:10:26.060 Test: blockdev write zeroes read no split ...passed 00:10:26.060 Test: blockdev write zeroes read split ...passed 00:10:26.060 Test: blockdev write zeroes read split partial ...passed 00:10:26.060 Test: blockdev reset ...[2024-07-15 19:32:16.708058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:10:26.060 [2024-07-15 19:32:16.712495] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
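The completion notices in these suites encode the NVMe status as an SCT/SC pair: (02/85) is status code type 2h (Media and Data Integrity Errors) with status code 85h (Compare Failure), i.e. the miscompare the comparev-and-writev test provokes, and (00/01) is the generic Invalid Command Opcode returned to the vendor-specific passthru probe; in both cases the surrounding tests still report passed, so these are exercised error paths rather than failures. A tiny decoder covering just the pairs seen in this log:

decode_nvme_status() {   # usage: decode_nvme_status 02 85
  case "$1/$2" in
    00/01) echo "Generic Command Status / Invalid Command Opcode" ;;
    02/85) echo "Media and Data Integrity Errors / Compare Failure" ;;
    *)     echo "SCT 0x$1 / SC 0x$2 (not in this mini table)" ;;
  esac
}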
00:10:26.060 passed 00:10:26.060 Test: blockdev write read 8 blocks ...passed 00:10:26.060 Test: blockdev write read size > 128k ...passed 00:10:26.060 Test: blockdev write read invalid size ...passed 00:10:26.060 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:26.060 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:26.060 Test: blockdev write read max offset ...passed 00:10:26.060 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:26.060 Test: blockdev writev readv 8 blocks ...passed 00:10:26.060 Test: blockdev writev readv 30 x 1block ...passed 00:10:26.060 Test: blockdev writev readv block ...passed 00:10:26.060 Test: blockdev writev readv size > 128k ...passed 00:10:26.060 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:26.060 Test: blockdev comparev and writev ...[2024-07-15 19:32:16.721708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27e63a000 len:0x1000 00:10:26.060 [2024-07-15 19:32:16.721767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:26.060 passed 00:10:26.060 Test: blockdev nvme passthru rw ...passed 00:10:26.060 Test: blockdev nvme passthru vendor specific ...passed 00:10:26.060 Test: blockdev nvme admin passthru ...[2024-07-15 19:32:16.722416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:26.060 [2024-07-15 19:32:16.722476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:26.060 passed 00:10:26.060 Test: blockdev copy ...passed 00:10:26.060 Suite: bdevio tests on: Nvme2n2 00:10:26.060 Test: blockdev write read block ...passed 00:10:26.060 Test: blockdev write zeroes read block ...passed 00:10:26.060 Test: blockdev write zeroes read no split ...passed 00:10:26.060 Test: blockdev write zeroes read split ...passed 00:10:26.060 Test: blockdev write zeroes read split partial ...passed 00:10:26.060 Test: blockdev reset ...[2024-07-15 19:32:16.834454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:10:26.060 [2024-07-15 19:32:16.839360] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:26.060 passed 00:10:26.061 Test: blockdev write read 8 blocks ...passed 00:10:26.061 Test: blockdev write read size > 128k ...passed 00:10:26.061 Test: blockdev write read invalid size ...passed 00:10:26.061 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:26.061 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:26.061 Test: blockdev write read max offset ...passed 00:10:26.061 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:26.061 Test: blockdev writev readv 8 blocks ...passed 00:10:26.061 Test: blockdev writev readv 30 x 1block ...passed 00:10:26.061 Test: blockdev writev readv block ...passed 00:10:26.061 Test: blockdev writev readv size > 128k ...passed 00:10:26.061 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:26.319 Test: blockdev comparev and writev ...[2024-07-15 19:32:16.850724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27e636000 len:0x1000 00:10:26.319 [2024-07-15 19:32:16.850989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:26.319 passed 00:10:26.319 Test: blockdev nvme passthru rw ...passed 00:10:26.319 Test: blockdev nvme passthru vendor specific ...passed 00:10:26.319 Test: blockdev nvme admin passthru ...[2024-07-15 19:32:16.852234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:26.319 [2024-07-15 19:32:16.852287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:26.319 passed 00:10:26.319 Test: blockdev copy ...passed 00:10:26.319 Suite: bdevio tests on: Nvme2n1 00:10:26.319 Test: blockdev write read block ...passed 00:10:26.319 Test: blockdev write zeroes read block ...passed 00:10:26.319 Test: blockdev write zeroes read no split ...passed 00:10:26.319 Test: blockdev write zeroes read split ...passed 00:10:26.319 Test: blockdev write zeroes read split partial ...passed 00:10:26.319 Test: blockdev reset ...[2024-07-15 19:32:16.943507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:10:26.319 [2024-07-15 19:32:16.948624] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:26.319 passed 00:10:26.319 Test: blockdev write read 8 blocks ...passed 00:10:26.319 Test: blockdev write read size > 128k ...passed 00:10:26.319 Test: blockdev write read invalid size ...passed 00:10:26.319 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:26.319 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:26.319 Test: blockdev write read max offset ...passed 00:10:26.319 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:26.319 Test: blockdev writev readv 8 blocks ...passed 00:10:26.319 Test: blockdev writev readv 30 x 1block ...passed 00:10:26.320 Test: blockdev writev readv block ...passed 00:10:26.320 Test: blockdev writev readv size > 128k ...passed 00:10:26.320 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:26.320 Test: blockdev comparev and writev ...[2024-07-15 19:32:16.959092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27e630000 len:0x1000 00:10:26.320 [2024-07-15 19:32:16.959325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:26.320 passed 00:10:26.320 Test: blockdev nvme passthru rw ...passed 00:10:26.320 Test: blockdev nvme passthru vendor specific ...[2024-07-15 19:32:16.960436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:26.320 passed 00:10:26.320 Test: blockdev nvme admin passthru ...[2024-07-15 19:32:16.960479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:26.320 passed 00:10:26.320 Test: blockdev copy ...passed 00:10:26.320 Suite: bdevio tests on: Nvme1n1 00:10:26.320 Test: blockdev write read block ...passed 00:10:26.320 Test: blockdev write zeroes read block ...passed 00:10:26.320 Test: blockdev write zeroes read no split ...passed 00:10:26.320 Test: blockdev write zeroes read split ...passed 00:10:26.320 Test: blockdev write zeroes read split partial ...passed 00:10:26.320 Test: blockdev reset ...[2024-07-15 19:32:17.050727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:10:26.320 [2024-07-15 19:32:17.055540] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:26.320 passed 00:10:26.320 Test: blockdev write read 8 blocks ...passed 00:10:26.320 Test: blockdev write read size > 128k ...passed 00:10:26.320 Test: blockdev write read invalid size ...passed 00:10:26.320 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:26.320 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:26.320 Test: blockdev write read max offset ...passed 00:10:26.320 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:26.320 Test: blockdev writev readv 8 blocks ...passed 00:10:26.320 Test: blockdev writev readv 30 x 1block ...passed 00:10:26.320 Test: blockdev writev readv block ...passed 00:10:26.320 Test: blockdev writev readv size > 128k ...passed 00:10:26.320 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:26.320 Test: blockdev comparev and writev ...[2024-07-15 19:32:17.065574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x275c0e000 len:0x1000 00:10:26.320 [2024-07-15 19:32:17.065830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:26.320 passed 00:10:26.320 Test: blockdev nvme passthru rw ...passed 00:10:26.320 Test: blockdev nvme passthru vendor specific ...passed 00:10:26.320 Test: blockdev nvme admin passthru ...[2024-07-15 19:32:17.066938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:26.320 [2024-07-15 19:32:17.066985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:26.320 passed 00:10:26.320 Test: blockdev copy ...passed 00:10:26.320 Suite: bdevio tests on: Nvme0n1p2 00:10:26.320 Test: blockdev write read block ...passed 00:10:26.320 Test: blockdev write zeroes read block ...passed 00:10:26.320 Test: blockdev write zeroes read no split ...passed 00:10:26.578 Test: blockdev write zeroes read split ...passed 00:10:26.578 Test: blockdev write zeroes read split partial ...passed 00:10:26.578 Test: blockdev reset ...[2024-07-15 19:32:17.156583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:26.578 [2024-07-15 19:32:17.161022] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:26.578 passed 00:10:26.578 Test: blockdev write read 8 blocks ...passed 00:10:26.579 Test: blockdev write read size > 128k ...passed 00:10:26.579 Test: blockdev write read invalid size ...passed 00:10:26.579 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:26.579 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:26.579 Test: blockdev write read max offset ...passed 00:10:26.579 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:26.579 Test: blockdev writev readv 8 blocks ...passed 00:10:26.579 Test: blockdev writev readv 30 x 1block ...passed 00:10:26.579 Test: blockdev writev readv block ...passed 00:10:26.579 Test: blockdev writev readv size > 128k ...passed 00:10:26.579 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:26.579 Test: blockdev comparev and writev ...passed 00:10:26.579 Test: blockdev nvme passthru rw ...passed 00:10:26.579 Test: blockdev nvme passthru vendor specific ...passed 00:10:26.579 Test: blockdev nvme admin passthru ...passed 00:10:26.579 Test: blockdev copy ...[2024-07-15 19:32:17.168723] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:10:26.579 separate metadata which is not supported yet. 00:10:26.579 passed 00:10:26.579 Suite: bdevio tests on: Nvme0n1p1 00:10:26.579 Test: blockdev write read block ...passed 00:10:26.579 Test: blockdev write zeroes read block ...passed 00:10:26.579 Test: blockdev write zeroes read no split ...passed 00:10:26.579 Test: blockdev write zeroes read split ...passed 00:10:26.579 Test: blockdev write zeroes read split partial ...passed 00:10:26.579 Test: blockdev reset ...[2024-07-15 19:32:17.245599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:26.579 [2024-07-15 19:32:17.250133] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:26.579 passed 00:10:26.579 Test: blockdev write read 8 blocks ...passed 00:10:26.579 Test: blockdev write read size > 128k ...passed 00:10:26.579 Test: blockdev write read invalid size ...passed 00:10:26.579 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:26.579 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:26.579 Test: blockdev write read max offset ...passed 00:10:26.579 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:26.579 Test: blockdev writev readv 8 blocks ...passed 00:10:26.579 Test: blockdev writev readv 30 x 1block ...passed 00:10:26.579 Test: blockdev writev readv block ...passed 00:10:26.579 Test: blockdev writev readv size > 128k ...passed 00:10:26.579 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:26.579 Test: blockdev comparev and writev ...passed 00:10:26.579 Test: blockdev nvme passthru rw ...passed 00:10:26.579 Test: blockdev nvme passthru vendor specific ...passed 00:10:26.579 Test: blockdev nvme admin passthru ...passed 00:10:26.579 Test: blockdev copy ...[2024-07-15 19:32:17.260452] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:10:26.579 separate metadata which is not supported yet. 
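The whole bdev_bounds step can be reproduced by hand with the two pieces it drives: bdevio is started against the same bdev.json (the empty trailing argument mirrors the harness invocation), and tests.py then issues the perform_tests RPC that runs the CUnit suites above. A sketch, with paths exactly as logged:

bdevio=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
"$bdevio" -w -s 0 --json "$conf" '' &
bdevio_pid=$!
# wait for the bdevio RPC socket to come up (see the waitforlisten sketch earlier),
# then kick off the suites and tear the process down:
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid" && wait "$bdevio_pid"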
00:10:26.579 passed 00:10:26.579 00:10:26.579 Run Summary: Type Total Ran Passed Failed Inactive 00:10:26.579 suites 7 7 n/a 0 0 00:10:26.579 tests 161 161 161 0 0 00:10:26.579 asserts 1006 1006 1006 0 n/a 00:10:26.579 00:10:26.579 Elapsed time = 2.171 seconds 00:10:26.579 0 00:10:26.579 19:32:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 68522 00:10:26.579 19:32:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 68522 ']' 00:10:26.579 19:32:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 68522 00:10:26.579 19:32:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:10:26.579 19:32:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:26.579 19:32:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68522 00:10:26.579 19:32:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:26.579 19:32:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:26.579 19:32:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68522' 00:10:26.579 killing process with pid 68522 00:10:26.579 19:32:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # kill 68522 00:10:26.579 19:32:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # wait 68522 00:10:28.027 19:32:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:10:28.027 00:10:28.027 real 0m3.594s 00:10:28.027 user 0m8.947s 00:10:28.027 sys 0m0.471s 00:10:28.027 19:32:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:28.028 ************************************ 00:10:28.028 END TEST bdev_bounds 00:10:28.028 ************************************ 00:10:28.028 19:32:18 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:10:28.028 19:32:18 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:28.028 19:32:18 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:28.028 19:32:18 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.028 19:32:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:28.028 ************************************ 00:10:28.028 START TEST bdev_nbd 00:10:28.028 ************************************ 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 
'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=7 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=7 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=68598 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 68598 /var/tmp/spdk-nbd.sock 00:10:28.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 68598 ']' 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:28.028 19:32:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:28.028 [2024-07-15 19:32:18.768068] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
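The bdev_nbd step starting here exports each bdev in the list through the kernel NBD driver: bdev_svc has just been launched with its RPC server on /var/tmp/spdk-nbd.sock, and nbd_rpc_start_stop_verify then maps each bdev onto one of the /dev/nbdN nodes enumerated above and sanity-checks it. A hand-run version of one export/check/teardown cycle might look like the following (the choice of /dev/nbd0 is arbitrary; nbd_start_disk/nbd_stop_disk are the standard SPDK RPCs):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
"$rpc" -s "$sock" nbd_start_disk Nvme0n1p1 /dev/nbd0          # export the bdev over NBD
until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done    # wait for the kernel node
dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct     # sanity-read one block
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0                     # tear it down again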
00:10:28.028 [2024-07-15 19:32:18.768947] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.287 [2024-07-15 19:32:18.970238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.547 [2024-07-15 19:32:19.232438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.481 19:32:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:29.481 19:32:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:10:29.481 19:32:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:29.481 19:32:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:29.481 19:32:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:29.481 19:32:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:29.481 19:32:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:29.481 19:32:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:29.481 19:32:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:29.481 19:32:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:29.481 19:32:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:29.481 19:32:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:29.481 19:32:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:29.481 19:32:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:29.481 19:32:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:10:29.739 19:32:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:29.739 19:32:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:29.739 19:32:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:29.739 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:29.739 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:29.739 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:29.740 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:29.740 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:29.740 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:29.740 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:29.740 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:29.740 19:32:20 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:29.740 1+0 records in 00:10:29.740 1+0 records out 00:10:29.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057898 s, 7.1 MB/s 00:10:29.740 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:29.740 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:29.740 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:29.740 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:29.740 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:29.740 19:32:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:29.740 19:32:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:29.740 19:32:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:10:29.998 19:32:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:29.998 19:32:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:29.998 19:32:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:29.998 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:29.998 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:29.998 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:29.998 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:29.998 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:29.998 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:29.998 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:29.998 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:29.998 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:29.998 1+0 records in 00:10:29.998 1+0 records out 00:10:29.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000754667 s, 5.4 MB/s 00:10:29.998 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:29.998 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:29.998 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:29.998 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:29.998 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:29.998 19:32:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:29.998 19:32:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:29.998 19:32:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1 00:10:30.257 19:32:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:30.257 19:32:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:30.257 19:32:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:30.257 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:10:30.257 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:30.257 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:30.257 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:30.257 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:10:30.257 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:30.257 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:30.257 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:30.257 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:30.257 1+0 records in 00:10:30.257 1+0 records out 00:10:30.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490897 s, 8.3 MB/s 00:10:30.257 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:30.257 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:30.257 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:30.257 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:30.257 19:32:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:30.257 19:32:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:30.257 19:32:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:30.257 19:32:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:30.516 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:30.516 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:30.516 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:30.516 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:10:30.516 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:30.516 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:30.516 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:30.516 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:10:30.516 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:30.516 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:30.516 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:30.516 19:32:21 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:30.516 1+0 records in 00:10:30.516 1+0 records out 00:10:30.516 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000630157 s, 6.5 MB/s 00:10:30.516 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:30.516 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:30.516 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:30.516 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:30.516 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:30.516 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:30.516 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:30.516 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:30.774 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:30.774 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:30.774 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:30.774 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:10:30.774 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:30.774 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:30.774 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:30.774 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:10:30.774 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:30.774 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:30.774 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:30.774 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:30.774 1+0 records in 00:10:30.774 1+0 records out 00:10:30.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105097 s, 3.9 MB/s 00:10:30.774 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:30.774 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:30.774 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:30.774 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:30.774 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:30.774 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:30.774 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:30.774 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
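Each nbd_start_disk call above is followed by the waitfornbd helper, which the xtrace expands into a /proc/partitions poll plus a single direct 4 KiB dd read whose size is checked with stat. In outline it behaves like the sketch below; the output path is simplified to /tmp/nbdtest here, whereas the real helper in autotest_common.sh writes under test/bdev/.

waitfornbd() {
    local nbd_name=$1 i
    # Give the kernel up to ~2 s to register the new NBD device.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # Prove the device answers a direct 4 KiB read and that data landed.
    for ((i = 1; i <= 20; i++)); do
        if dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct &&
           [ "$(stat -c %s /tmp/nbdtest)" -ne 0 ]; then
            rm -f /tmp/nbdtest
            return 0
        fi
        sleep 0.1
    done
    rm -f /tmp/nbdtest
    return 1
}
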
00:10:31.105 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:31.105 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:31.105 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:31.105 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:10:31.105 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:31.105 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:31.105 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:31.105 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:10:31.105 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:31.105 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:31.105 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:31.105 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:31.105 1+0 records in 00:10:31.105 1+0 records out 00:10:31.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103031 s, 4.0 MB/s 00:10:31.105 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:31.105 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:31.105 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:31.105 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:31.105 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:31.105 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:31.105 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:31.105 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:31.362 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:10:31.362 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:10:31.362 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:10:31.362 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:10:31.362 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:31.362 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:31.362 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:31.362 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:10:31.362 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:31.362 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:31.362 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:31.362 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd 
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:31.362 1+0 records in 00:10:31.362 1+0 records out 00:10:31.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105056 s, 3.9 MB/s 00:10:31.362 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:31.362 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:31.362 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:31.362 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:31.362 19:32:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:31.362 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:31.362 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:31.362 19:32:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:31.619 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:31.619 { 00:10:31.619 "nbd_device": "/dev/nbd0", 00:10:31.619 "bdev_name": "Nvme0n1p1" 00:10:31.619 }, 00:10:31.619 { 00:10:31.619 "nbd_device": "/dev/nbd1", 00:10:31.619 "bdev_name": "Nvme0n1p2" 00:10:31.619 }, 00:10:31.619 { 00:10:31.619 "nbd_device": "/dev/nbd2", 00:10:31.619 "bdev_name": "Nvme1n1" 00:10:31.619 }, 00:10:31.619 { 00:10:31.619 "nbd_device": "/dev/nbd3", 00:10:31.619 "bdev_name": "Nvme2n1" 00:10:31.619 }, 00:10:31.619 { 00:10:31.619 "nbd_device": "/dev/nbd4", 00:10:31.619 "bdev_name": "Nvme2n2" 00:10:31.619 }, 00:10:31.619 { 00:10:31.619 "nbd_device": "/dev/nbd5", 00:10:31.619 "bdev_name": "Nvme2n3" 00:10:31.619 }, 00:10:31.619 { 00:10:31.619 "nbd_device": "/dev/nbd6", 00:10:31.619 "bdev_name": "Nvme3n1" 00:10:31.619 } 00:10:31.619 ]' 00:10:31.619 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:31.619 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:31.619 { 00:10:31.619 "nbd_device": "/dev/nbd0", 00:10:31.619 "bdev_name": "Nvme0n1p1" 00:10:31.619 }, 00:10:31.619 { 00:10:31.619 "nbd_device": "/dev/nbd1", 00:10:31.619 "bdev_name": "Nvme0n1p2" 00:10:31.619 }, 00:10:31.619 { 00:10:31.619 "nbd_device": "/dev/nbd2", 00:10:31.619 "bdev_name": "Nvme1n1" 00:10:31.619 }, 00:10:31.619 { 00:10:31.619 "nbd_device": "/dev/nbd3", 00:10:31.619 "bdev_name": "Nvme2n1" 00:10:31.619 }, 00:10:31.619 { 00:10:31.619 "nbd_device": "/dev/nbd4", 00:10:31.619 "bdev_name": "Nvme2n2" 00:10:31.619 }, 00:10:31.619 { 00:10:31.619 "nbd_device": "/dev/nbd5", 00:10:31.619 "bdev_name": "Nvme2n3" 00:10:31.619 }, 00:10:31.619 { 00:10:31.619 "nbd_device": "/dev/nbd6", 00:10:31.619 "bdev_name": "Nvme3n1" 00:10:31.619 } 00:10:31.619 ]' 00:10:31.619 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:31.619 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:10:31.619 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:31.619 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:10:31.619 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:31.619 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:31.619 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:31.619 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:31.875 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:31.875 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:31.875 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:31.875 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:31.875 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:31.875 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:31.875 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:31.875 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:31.875 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:31.875 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:32.132 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:32.132 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:32.132 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:32.132 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:32.132 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:32.132 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:32.132 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:32.132 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:32.132 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:32.132 19:32:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:32.698 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:32.698 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:32.698 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:32.698 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:32.698 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:32.698 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:32.698 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:32.698 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:32.698 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:32.698 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:32.698 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:32.698 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:32.698 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:32.698 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:32.698 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:32.698 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:32.698 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:32.698 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:32.698 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:32.698 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:32.956 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:32.956 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:32.956 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:32.956 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:32.956 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:32.956 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:32.956 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:32.956 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:32.956 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:32.956 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:33.213 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:33.213 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:33.213 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:33.213 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:33.213 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:33.213 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:33.213 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:33.213 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:33.213 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:33.213 19:32:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:10:33.778 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:10:33.778 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:10:33.778 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:10:33.778 19:32:24 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:33.778 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:33.778 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:10:33.778 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:33.778 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:33.778 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:33.778 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:33.778 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:33.778 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:33.778 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:33.778 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:34.035 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:10:34.292 /dev/nbd0 00:10:34.292 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:34.292 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:34.292 19:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:34.292 19:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:34.292 19:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:34.292 19:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:34.292 19:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:34.292 19:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:34.292 19:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:34.292 19:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:34.292 19:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:34.292 1+0 records in 00:10:34.292 1+0 records out 00:10:34.292 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000859129 s, 4.8 MB/s 00:10:34.292 19:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.293 19:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:34.293 19:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.293 19:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:34.293 19:32:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:34.293 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:34.293 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:34.293 19:32:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:10:34.551 /dev/nbd1 00:10:34.551 19:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:34.551 19:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:34.551 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:34.551 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:34.551 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:34.551 19:32:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:34.551 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:34.551 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:34.551 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:34.551 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:34.551 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:34.551 1+0 records in 00:10:34.551 1+0 records out 00:10:34.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000834584 s, 4.9 MB/s 00:10:34.551 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.551 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:34.551 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.551 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:34.551 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:34.551 19:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:34.551 19:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:34.551 19:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:10:34.810 /dev/nbd10 00:10:34.810 19:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:34.810 19:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:34.810 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:10:34.810 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:34.810 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:34.810 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:34.810 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:10:34.810 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:34.810 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:34.810 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:34.810 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:34.810 1+0 records in 00:10:34.810 1+0 records out 00:10:34.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589009 s, 7.0 MB/s 00:10:34.810 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.810 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:34.810 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.810 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 
'!=' 0 ']' 00:10:34.810 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:34.810 19:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:34.810 19:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:34.810 19:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:10:35.069 /dev/nbd11 00:10:35.069 19:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:35.069 19:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:35.069 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:10:35.069 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:35.069 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:35.069 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:35.069 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:10:35.069 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:35.069 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:35.069 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:35.069 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:35.069 1+0 records in 00:10:35.069 1+0 records out 00:10:35.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000699835 s, 5.9 MB/s 00:10:35.069 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.069 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:35.069 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.069 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:35.069 19:32:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:35.069 19:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:35.069 19:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:35.069 19:32:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:10:35.635 /dev/nbd12 00:10:35.635 19:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:35.635 19:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:35.635 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:10:35.635 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:35.635 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:35.635 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:35.635 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:10:35.635 19:32:26 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:35.635 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:35.635 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:35.635 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:35.635 1+0 records in 00:10:35.635 1+0 records out 00:10:35.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000950235 s, 4.3 MB/s 00:10:35.635 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.635 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:35.635 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.635 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:35.635 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:35.635 19:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:35.635 19:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:35.635 19:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:10:35.892 /dev/nbd13 00:10:35.892 19:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:35.892 19:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:35.892 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:10:35.892 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:35.892 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:35.892 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:35.892 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:10:35.892 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:35.892 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:35.892 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:35.892 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:35.892 1+0 records in 00:10:35.892 1+0 records out 00:10:35.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000887073 s, 4.6 MB/s 00:10:35.892 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.892 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:35.892 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.892 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:35.892 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:35.892 19:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:10:35.892 19:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:35.892 19:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:10:36.149 /dev/nbd14 00:10:36.149 19:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:10:36.149 19:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:10:36.149 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:10:36.149 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:36.149 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:36.149 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:36.149 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:10:36.149 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:36.149 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:36.149 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:36.149 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:36.149 1+0 records in 00:10:36.149 1+0 records out 00:10:36.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000701971 s, 5.8 MB/s 00:10:36.149 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.149 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:36.149 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.149 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:36.149 19:32:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:36.149 19:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:36.149 19:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:36.149 19:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:36.149 19:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:36.149 19:32:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:36.407 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:36.407 { 00:10:36.407 "nbd_device": "/dev/nbd0", 00:10:36.407 "bdev_name": "Nvme0n1p1" 00:10:36.407 }, 00:10:36.407 { 00:10:36.407 "nbd_device": "/dev/nbd1", 00:10:36.407 "bdev_name": "Nvme0n1p2" 00:10:36.407 }, 00:10:36.407 { 00:10:36.407 "nbd_device": "/dev/nbd10", 00:10:36.407 "bdev_name": "Nvme1n1" 00:10:36.407 }, 00:10:36.407 { 00:10:36.407 "nbd_device": "/dev/nbd11", 00:10:36.407 "bdev_name": "Nvme2n1" 00:10:36.407 }, 00:10:36.407 { 00:10:36.407 "nbd_device": "/dev/nbd12", 00:10:36.407 "bdev_name": "Nvme2n2" 00:10:36.407 }, 00:10:36.407 { 00:10:36.407 "nbd_device": "/dev/nbd13", 00:10:36.407 "bdev_name": "Nvme2n3" 00:10:36.407 }, 00:10:36.407 { 
00:10:36.407 "nbd_device": "/dev/nbd14", 00:10:36.407 "bdev_name": "Nvme3n1" 00:10:36.407 } 00:10:36.407 ]' 00:10:36.407 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:36.407 { 00:10:36.407 "nbd_device": "/dev/nbd0", 00:10:36.407 "bdev_name": "Nvme0n1p1" 00:10:36.407 }, 00:10:36.407 { 00:10:36.407 "nbd_device": "/dev/nbd1", 00:10:36.407 "bdev_name": "Nvme0n1p2" 00:10:36.407 }, 00:10:36.407 { 00:10:36.407 "nbd_device": "/dev/nbd10", 00:10:36.407 "bdev_name": "Nvme1n1" 00:10:36.407 }, 00:10:36.407 { 00:10:36.407 "nbd_device": "/dev/nbd11", 00:10:36.407 "bdev_name": "Nvme2n1" 00:10:36.407 }, 00:10:36.407 { 00:10:36.407 "nbd_device": "/dev/nbd12", 00:10:36.407 "bdev_name": "Nvme2n2" 00:10:36.407 }, 00:10:36.407 { 00:10:36.407 "nbd_device": "/dev/nbd13", 00:10:36.407 "bdev_name": "Nvme2n3" 00:10:36.407 }, 00:10:36.407 { 00:10:36.407 "nbd_device": "/dev/nbd14", 00:10:36.407 "bdev_name": "Nvme3n1" 00:10:36.407 } 00:10:36.407 ]' 00:10:36.407 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:36.407 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:36.407 /dev/nbd1 00:10:36.407 /dev/nbd10 00:10:36.407 /dev/nbd11 00:10:36.407 /dev/nbd12 00:10:36.407 /dev/nbd13 00:10:36.407 /dev/nbd14' 00:10:36.407 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:36.407 /dev/nbd1 00:10:36.407 /dev/nbd10 00:10:36.407 /dev/nbd11 00:10:36.407 /dev/nbd12 00:10:36.407 /dev/nbd13 00:10:36.407 /dev/nbd14' 00:10:36.407 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:36.407 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:10:36.407 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:10:36.407 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:10:36.407 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:10:36.407 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:10:36.407 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:36.407 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:36.407 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:36.407 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:36.407 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:36.407 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:36.407 256+0 records in 00:10:36.407 256+0 records out 00:10:36.407 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00916541 s, 114 MB/s 00:10:36.407 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:36.407 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:36.665 256+0 records in 00:10:36.665 256+0 records out 00:10:36.665 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144365 s, 7.3 MB/s 00:10:36.665 
19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:36.665 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:36.665 256+0 records in 00:10:36.665 256+0 records out 00:10:36.665 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146296 s, 7.2 MB/s 00:10:36.665 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:36.665 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:36.923 256+0 records in 00:10:36.923 256+0 records out 00:10:36.923 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147815 s, 7.1 MB/s 00:10:36.923 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:36.923 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:37.180 256+0 records in 00:10:37.180 256+0 records out 00:10:37.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148289 s, 7.1 MB/s 00:10:37.180 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:37.180 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:37.180 256+0 records in 00:10:37.180 256+0 records out 00:10:37.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145934 s, 7.2 MB/s 00:10:37.180 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:37.180 19:32:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:37.438 256+0 records in 00:10:37.438 256+0 records out 00:10:37.438 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14568 s, 7.2 MB/s 00:10:37.438 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:37.438 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:10:37.438 256+0 records in 00:10:37.438 256+0 records out 00:10:37.438 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14602 s, 7.2 MB/s 00:10:37.439 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:10:37.439 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:37.439 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:37.439 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:37.439 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:37.439 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:37.439 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:37.439 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:37.439 19:32:28 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:37.439 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:37.439 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:37.439 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:37.439 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:37.439 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:37.439 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:37.696 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:37.696 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:37.696 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:37.696 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:37.696 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:37.696 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:10:37.696 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:37.696 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:37.696 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:37.696 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:37.696 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:37.696 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:37.696 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:37.696 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:37.954 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:37.954 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:37.954 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:37.954 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:37.954 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:37.954 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:37.954 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:37.954 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:37.954 
19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:37.954 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:38.211 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:38.211 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:38.211 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:38.211 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:38.211 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:38.211 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:38.211 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:38.211 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:38.211 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:38.211 19:32:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:38.469 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:38.469 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:38.469 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:38.469 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:38.469 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:38.469 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:38.469 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:38.469 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:38.469 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:38.469 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:38.726 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:38.726 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:38.726 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:38.726 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:38.726 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:38.726 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:38.726 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:38.726 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:38.726 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:38.726 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:38.984 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:10:38.984 19:32:29 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:38.984 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:38.984 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:38.984 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:38.984 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:38.984 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:38.984 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:38.984 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:38.984 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:39.242 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:39.242 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:39.242 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:39.242 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:39.242 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:39.242 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:39.242 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:39.242 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:39.242 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:39.242 19:32:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:39.547 
19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:10:39.547 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:40.113 malloc_lvol_verify 00:10:40.113 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:40.113 c27ed4fa-ae85-47ec-9974-cf983688eec1 00:10:40.113 19:32:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:40.370 ac3d8020-b5c6-411a-869e-620c248b6342 00:10:40.370 19:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:40.628 /dev/nbd0 00:10:40.628 19:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:10:40.628 mke2fs 1.46.5 (30-Dec-2021) 00:10:40.628 Discarding device blocks: 0/4096 done 00:10:40.628 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:40.628 00:10:40.628 Allocating group tables: 0/1 done 00:10:40.628 Writing inode tables: 0/1 done 00:10:40.628 Creating journal (1024 blocks): done 00:10:40.628 Writing superblocks and filesystem accounting information: 0/1 done 00:10:40.628 00:10:40.628 19:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:10:40.628 19:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:40.628 19:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:40.628 19:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:40.628 19:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:40.628 19:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:40.628 19:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.628 
19:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:40.887 19:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:40.887 19:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:40.887 19:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:40.887 19:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.887 19:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.887 19:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:40.887 19:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:40.887 19:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.887 19:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:10:40.887 19:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:10:40.887 19:32:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 68598 00:10:40.887 19:32:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 68598 ']' 00:10:40.887 19:32:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 68598 00:10:40.887 19:32:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:10:40.887 19:32:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:40.887 19:32:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68598 00:10:40.887 killing process with pid 68598 00:10:40.887 19:32:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:40.887 19:32:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:40.887 19:32:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68598' 00:10:40.887 19:32:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # kill 68598 00:10:40.887 19:32:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # wait 68598 00:10:42.795 19:32:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:10:42.795 00:10:42.795 real 0m14.482s 00:10:42.795 user 0m19.031s 00:10:42.795 sys 0m5.747s 00:10:42.795 ************************************ 00:10:42.795 END TEST bdev_nbd 00:10:42.795 ************************************ 00:10:42.795 19:32:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:42.795 19:32:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:42.795 19:32:33 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:10:42.795 19:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:10:42.795 19:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:10:42.795 19:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:10:42.795 skipping fio tests on NVMe due to multi-ns failures. 00:10:42.795 19:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
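The bdev_nbd stage that ends above exercises a full NBD round trip: the seven bdevs already exported as /dev/nbd0, /dev/nbd1 and /dev/nbd10 through /dev/nbd14 over the /var/tmp/spdk-nbd.sock RPC socket each receive a 1 MiB random file (nbdrandtest) via dd with oflag=direct, the data is read back with cmp, and the devices are detached again with nbd_stop_disk, with waitfornbd_exit polling /proc/partitions until each nbdN entry disappears. The tail of the stage repeats the idea on a logical volume: a 16 MiB malloc bdev backs an lvstore, a 4 MiB lvol carved from it is exported as /dev/nbd0 and formatted with mkfs.ext4 before being stopped. A condensed sketch of the write/verify/teardown loop, assuming an SPDK app is already serving /var/tmp/spdk-nbd.sock with the devices mapped (the temp file and the 0.1 s retry are stand-ins; the real helper runs the write, verify and stop phases as separate passes and caps the wait at 20 tries), is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    tmp=$(mktemp)                                    # stands in for test/bdev/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256   # 1 MiB of test data
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10; do    # shortened device list
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp" "$nbd"                   # byte-for-byte verify
        "$rpc" -s "$sock" nbd_stop_disk "$nbd"
        while grep -q -w "$(basename "$nbd")" /proc/partitions; do
            sleep 0.1                                # wait for the kernel to drop the device
        done
    done
    rm "$tmp"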
00:10:42.795 19:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:42.795 19:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:42.795 19:32:33 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:10:42.795 19:32:33 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.795 19:32:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:42.795 ************************************ 00:10:42.795 START TEST bdev_verify 00:10:42.795 ************************************ 00:10:42.795 19:32:33 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:42.795 [2024-07-15 19:32:33.266476] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:10:42.795 [2024-07-15 19:32:33.266633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69042 ] 00:10:42.795 [2024-07-15 19:32:33.430666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:43.053 [2024-07-15 19:32:33.705598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.053 [2024-07-15 19:32:33.705627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.987 Running I/O for 5 seconds... 
00:10:49.256 00:10:49.256 Latency(us) 00:10:49.256 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:49.256 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.256 Verification LBA range: start 0x0 length 0x5e800 00:10:49.256 Nvme0n1p1 : 5.06 1315.81 5.14 0.00 0.00 96901.54 20846.69 95869.81 00:10:49.256 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.256 Verification LBA range: start 0x5e800 length 0x5e800 00:10:49.256 Nvme0n1p1 : 5.06 1214.12 4.74 0.00 0.00 105035.54 21221.18 106854.89 00:10:49.256 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.256 Verification LBA range: start 0x0 length 0x5e7ff 00:10:49.256 Nvme0n1p2 : 5.06 1315.30 5.14 0.00 0.00 96757.22 23717.79 88379.98 00:10:49.256 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.256 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:10:49.256 Nvme0n1p2 : 5.06 1213.41 4.74 0.00 0.00 104863.44 23967.45 103359.63 00:10:49.256 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.256 Verification LBA range: start 0x0 length 0xa0000 00:10:49.256 Nvme1n1 : 5.06 1314.55 5.13 0.00 0.00 96611.17 25090.93 83886.08 00:10:49.256 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.256 Verification LBA range: start 0xa0000 length 0xa0000 00:10:49.256 Nvme1n1 : 5.07 1212.75 4.74 0.00 0.00 104660.01 23592.96 100363.70 00:10:49.256 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.256 Verification LBA range: start 0x0 length 0x80000 00:10:49.256 Nvme2n1 : 5.08 1322.05 5.16 0.00 0.00 95924.10 7365.00 80890.15 00:10:49.256 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.256 Verification LBA range: start 0x80000 length 0x80000 00:10:49.256 Nvme2n1 : 5.08 1220.95 4.77 0.00 0.00 103806.92 6616.02 97367.77 00:10:49.256 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.256 Verification LBA range: start 0x0 length 0x80000 00:10:49.256 Nvme2n2 : 5.08 1321.61 5.16 0.00 0.00 95733.47 7833.11 82388.11 00:10:49.256 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.256 Verification LBA range: start 0x80000 length 0x80000 00:10:49.256 Nvme2n2 : 5.09 1220.35 4.77 0.00 0.00 103608.84 7302.58 99864.38 00:10:49.256 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.256 Verification LBA range: start 0x0 length 0x80000 00:10:49.256 Nvme2n3 : 5.10 1331.11 5.20 0.00 0.00 94978.23 8238.81 83386.76 00:10:49.256 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.256 Verification LBA range: start 0x80000 length 0x80000 00:10:49.256 Nvme2n3 : 5.10 1230.40 4.81 0.00 0.00 102695.20 7271.38 103359.63 00:10:49.256 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.256 Verification LBA range: start 0x0 length 0x20000 00:10:49.256 Nvme3n1 : 5.10 1330.75 5.20 0.00 0.00 94798.59 8301.23 85883.37 00:10:49.256 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.256 Verification LBA range: start 0x20000 length 0x20000 00:10:49.256 Nvme3n1 : 5.10 1229.97 4.80 0.00 0.00 102508.11 7864.32 105856.24 00:10:49.256 =================================================================================================================== 00:10:49.256 Total : 17793.14 69.50 0.00 0.00 99755.93 6616.02 106854.89 
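The table above is bdevperf output: the verify stage runs the generated bdev config for 5 seconds with 128 outstanding 4 KiB I/Os per job on two reactor cores (-m 0x3), and -C makes every core drive every bdev, which is why each namespace appears twice, once per core mask. The following big-I/O and write-zeroes stages reuse the same harness with -o 65536 and with -w write_zeroes -t 1 respectively. Run standalone from a repository checkout it looks roughly like this (paths shortened from the absolute ones used in this job):

    # 128 outstanding I/Os, 4 KiB verify workload, 5 s, cores 0-1; -C makes each
    # core submit to every bdev, matching the per-core rows in the table above.
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3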
00:10:50.629 00:10:50.629 real 0m8.172s 00:10:50.629 user 0m14.842s 00:10:50.629 sys 0m0.295s 00:10:50.629 19:32:41 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:50.629 19:32:41 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:50.629 ************************************ 00:10:50.629 END TEST bdev_verify 00:10:50.629 ************************************ 00:10:50.629 19:32:41 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:10:50.629 19:32:41 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:50.629 19:32:41 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:10:50.629 19:32:41 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:50.629 19:32:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:50.629 ************************************ 00:10:50.629 START TEST bdev_verify_big_io 00:10:50.629 ************************************ 00:10:50.629 19:32:41 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:50.888 [2024-07-15 19:32:41.528041] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:10:50.888 [2024-07-15 19:32:41.528212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69150 ] 00:10:51.145 [2024-07-15 19:32:41.711976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:51.403 [2024-07-15 19:32:41.962037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.403 [2024-07-15 19:32:41.962085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.335 Running I/O for 5 seconds... 
00:10:58.965 00:10:58.965 Latency(us) 00:10:58.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:58.965 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:58.965 Verification LBA range: start 0x0 length 0x5e80 00:10:58.965 Nvme0n1p1 : 5.80 121.55 7.60 0.00 0.00 1009320.53 23468.13 1070546.16 00:10:58.965 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:58.965 Verification LBA range: start 0x5e80 length 0x5e80 00:10:58.965 Nvme0n1p1 : 5.87 141.85 8.87 0.00 0.00 791695.28 44439.65 818887.92 00:10:58.965 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:58.965 Verification LBA range: start 0x0 length 0x5e7f 00:10:58.965 Nvme0n1p2 : 5.86 126.36 7.90 0.00 0.00 954891.40 53427.44 910763.15 00:10:58.965 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:58.965 Verification LBA range: start 0x5e7f length 0x5e7f 00:10:58.965 Nvme0n1p2 : 5.94 142.87 8.93 0.00 0.00 762654.15 44439.65 1070546.16 00:10:58.965 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:58.965 Verification LBA range: start 0x0 length 0xa000 00:10:58.965 Nvme1n1 : 5.80 126.12 7.88 0.00 0.00 934658.47 53677.10 898779.43 00:10:58.965 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:58.965 Verification LBA range: start 0xa000 length 0xa000 00:10:58.965 Nvme1n1 : 6.01 151.93 9.50 0.00 0.00 705500.73 1810.04 1581851.79 00:10:58.965 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:58.965 Verification LBA range: start 0x0 length 0x8000 00:10:58.965 Nvme2n1 : 5.86 131.09 8.19 0.00 0.00 877610.91 52179.14 914757.73 00:10:58.965 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:58.965 Verification LBA range: start 0x8000 length 0x8000 00:10:58.965 Nvme2n1 : 5.74 129.54 8.10 0.00 0.00 946916.17 33204.91 1070546.16 00:10:58.965 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:58.965 Verification LBA range: start 0x0 length 0x8000 00:10:58.965 Nvme2n2 : 5.86 130.95 8.18 0.00 0.00 852970.87 55924.05 938725.18 00:10:58.965 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:58.965 Verification LBA range: start 0x8000 length 0x8000 00:10:58.965 Nvme2n2 : 5.74 128.29 8.02 0.00 0.00 925590.71 123332.51 918752.30 00:10:58.965 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:58.965 Verification LBA range: start 0x0 length 0x8000 00:10:58.965 Nvme2n3 : 6.00 138.66 8.67 0.00 0.00 784015.06 41443.72 954703.48 00:10:58.965 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:58.965 Verification LBA range: start 0x8000 length 0x8000 00:10:58.965 Nvme2n3 : 5.75 133.60 8.35 0.00 0.00 882485.80 91875.23 770953.02 00:10:58.965 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:58.965 Verification LBA range: start 0x0 length 0x2000 00:10:58.965 Nvme3n1 : 6.03 146.87 9.18 0.00 0.00 724997.95 4930.80 1126470.22 00:10:58.965 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:58.965 Verification LBA range: start 0x2000 length 0x2000 00:10:58.965 Nvme3n1 : 5.82 137.02 8.56 0.00 0.00 839174.36 64911.85 906768.58 00:10:58.965 =================================================================================================================== 00:10:58.965 Total : 1886.69 117.92 0.00 0.00 849943.68 1810.04 
1581851.79 00:11:00.889 00:11:00.889 real 0m9.780s 00:11:00.889 user 0m17.967s 00:11:00.889 sys 0m0.355s 00:11:00.889 19:32:51 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:00.889 ************************************ 00:11:00.889 19:32:51 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:11:00.889 END TEST bdev_verify_big_io 00:11:00.889 ************************************ 00:11:00.889 19:32:51 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:11:00.889 19:32:51 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:00.889 19:32:51 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:11:00.889 19:32:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:00.889 19:32:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:00.889 ************************************ 00:11:00.889 START TEST bdev_write_zeroes 00:11:00.889 ************************************ 00:11:00.889 19:32:51 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:00.889 [2024-07-15 19:32:51.370836] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:11:00.889 [2024-07-15 19:32:51.371021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69270 ] 00:11:00.889 [2024-07-15 19:32:51.545311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.147 [2024-07-15 19:32:51.793931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.081 Running I/O for 1 seconds... 
00:11:03.013 00:11:03.013 Latency(us) 00:11:03.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:03.013 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:03.013 Nvme0n1p1 : 1.02 7139.79 27.89 0.00 0.00 17875.69 11671.65 26713.72 00:11:03.013 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:03.013 Nvme0n1p2 : 1.02 7127.95 27.84 0.00 0.00 17872.93 11796.48 27712.37 00:11:03.013 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:03.013 Nvme1n1 : 1.03 7117.01 27.80 0.00 0.00 17841.59 12170.97 25340.59 00:11:03.013 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:03.013 Nvme2n1 : 1.03 7106.13 27.76 0.00 0.00 17743.61 9861.61 21595.67 00:11:03.013 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:03.013 Nvme2n2 : 1.03 7095.54 27.72 0.00 0.00 17731.59 9299.87 21720.50 00:11:03.013 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:03.013 Nvme2n3 : 1.03 7084.90 27.68 0.00 0.00 17719.82 8550.89 21720.50 00:11:03.013 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:03.013 Nvme3n1 : 1.03 7012.10 27.39 0.00 0.00 17875.11 13044.78 28586.18 00:11:03.013 =================================================================================================================== 00:11:03.013 Total : 49683.42 194.08 0.00 0.00 17808.54 8550.89 28586.18 00:11:04.916 00:11:04.916 real 0m3.964s 00:11:04.916 user 0m3.534s 00:11:04.916 sys 0m0.306s 00:11:04.916 19:32:55 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:04.916 19:32:55 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:11:04.916 ************************************ 00:11:04.916 END TEST bdev_write_zeroes 00:11:04.916 ************************************ 00:11:04.916 19:32:55 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:11:04.916 19:32:55 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:04.916 19:32:55 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:11:04.916 19:32:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:04.916 19:32:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:04.916 ************************************ 00:11:04.916 START TEST bdev_json_nonenclosed 00:11:04.916 ************************************ 00:11:04.916 19:32:55 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:04.916 [2024-07-15 19:32:55.400116] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:11:04.916 [2024-07-15 19:32:55.400314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69335 ] 00:11:04.916 [2024-07-15 19:32:55.587065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.175 [2024-07-15 19:32:55.839034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.175 [2024-07-15 19:32:55.839132] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:05.175 [2024-07-15 19:32:55.839153] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:05.175 [2024-07-15 19:32:55.839169] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:05.742 00:11:05.742 real 0m1.043s 00:11:05.742 user 0m0.755s 00:11:05.742 sys 0m0.180s 00:11:05.742 19:32:56 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:11:05.742 19:32:56 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:05.742 19:32:56 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:05.742 ************************************ 00:11:05.742 END TEST bdev_json_nonenclosed 00:11:05.742 ************************************ 00:11:05.742 19:32:56 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:11:05.742 19:32:56 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # true 00:11:05.742 19:32:56 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:05.742 19:32:56 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:11:05.742 19:32:56 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:05.742 19:32:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:05.742 ************************************ 00:11:05.742 START TEST bdev_json_nonarray 00:11:05.742 ************************************ 00:11:05.742 19:32:56 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:05.742 [2024-07-15 19:32:56.496168] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:11:05.742 [2024-07-15 19:32:56.496346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69366 ] 00:11:06.000 [2024-07-15 19:32:56.683067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.258 [2024-07-15 19:32:56.933602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.258 [2024-07-15 19:32:56.933699] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:11:06.258 [2024-07-15 19:32:56.933719] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:06.258 [2024-07-15 19:32:56.933736] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:06.826 00:11:06.826 real 0m1.044s 00:11:06.826 user 0m0.759s 00:11:06.826 sys 0m0.178s 00:11:06.826 19:32:57 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:11:06.826 19:32:57 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:06.826 ************************************ 00:11:06.826 END TEST bdev_json_nonarray 00:11:06.826 ************************************ 00:11:06.826 19:32:57 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:06.826 19:32:57 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:11:06.826 19:32:57 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # true 00:11:06.826 19:32:57 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:11:06.826 19:32:57 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:11:06.826 19:32:57 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:11:06.826 19:32:57 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:06.826 19:32:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.826 19:32:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:06.826 ************************************ 00:11:06.826 START TEST bdev_gpt_uuid 00:11:06.826 ************************************ 00:11:06.826 19:32:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1123 -- # bdev_gpt_uuid 00:11:06.826 19:32:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:11:06.826 19:32:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:11:06.826 19:32:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69402 00:11:06.826 19:32:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:06.826 19:32:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 69402 00:11:06.826 19:32:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@829 -- # '[' -z 69402 ']' 00:11:06.826 19:32:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:06.826 19:32:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.826 19:32:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.826 19:32:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.826 19:32:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.826 19:32:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:06.826 [2024-07-15 19:32:57.586948] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
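The two short stages above, bdev_json_nonenclosed and bdev_json_nonarray, are negative tests: bdevperf is pointed at deliberately malformed --json files and is expected to refuse to start, hence the json_config errors about the configuration not being enclosed in {} and 'subsystems' not being an array, and hence exit status 234 being captured as es=234 and then swallowed by the bare true in blockdev.sh. The fixture files themselves are not reproduced in this log; purely for illustration, a well-formed SPDK app config is a single JSON object whose "subsystems" member is an array of subsystem objects, roughly:

    {
      "subsystems": [
        { "subsystem": "bdev", "config": [] }
      ]
    }

The nonenclosed fixture presumably drops the enclosing object and the nonarray fixture replaces the array with something else; either way spdk_app_start fails before any I/O is issued, which is exactly what these two tests assert.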
00:11:06.826 [2024-07-15 19:32:57.587090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69402 ] 00:11:07.084 [2024-07-15 19:32:57.759986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.345 [2024-07-15 19:32:58.062035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.720 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:08.720 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # return 0 00:11:08.720 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:08.720 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.720 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:08.720 Some configs were skipped because the RPC state that can call them passed over. 00:11:08.720 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.720 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:11:08.720 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.720 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:08.720 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.720 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:11:08.720 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.720 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:08.720 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.720 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:11:08.720 { 00:11:08.720 "name": "Nvme0n1p1", 00:11:08.720 "aliases": [ 00:11:08.720 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:11:08.720 ], 00:11:08.720 "product_name": "GPT Disk", 00:11:08.720 "block_size": 4096, 00:11:08.720 "num_blocks": 774144, 00:11:08.720 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:08.720 "md_size": 64, 00:11:08.720 "md_interleave": false, 00:11:08.720 "dif_type": 0, 00:11:08.720 "assigned_rate_limits": { 00:11:08.720 "rw_ios_per_sec": 0, 00:11:08.720 "rw_mbytes_per_sec": 0, 00:11:08.720 "r_mbytes_per_sec": 0, 00:11:08.720 "w_mbytes_per_sec": 0 00:11:08.720 }, 00:11:08.720 "claimed": false, 00:11:08.720 "zoned": false, 00:11:08.720 "supported_io_types": { 00:11:08.720 "read": true, 00:11:08.720 "write": true, 00:11:08.720 "unmap": true, 00:11:08.720 "flush": true, 00:11:08.720 "reset": true, 00:11:08.720 "nvme_admin": false, 00:11:08.720 "nvme_io": false, 00:11:08.720 "nvme_io_md": false, 00:11:08.720 "write_zeroes": true, 00:11:08.720 "zcopy": false, 00:11:08.720 "get_zone_info": false, 00:11:08.720 "zone_management": false, 00:11:08.720 "zone_append": false, 00:11:08.720 "compare": true, 00:11:08.720 "compare_and_write": false, 00:11:08.720 "abort": true, 00:11:08.720 "seek_hole": false, 00:11:08.720 "seek_data": false, 00:11:08.720 "copy": 
true, 00:11:08.720 "nvme_iov_md": false 00:11:08.720 }, 00:11:08.720 "driver_specific": { 00:11:08.720 "gpt": { 00:11:08.720 "base_bdev": "Nvme0n1", 00:11:08.720 "offset_blocks": 256, 00:11:08.720 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:11:08.720 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:08.720 "partition_name": "SPDK_TEST_first" 00:11:08.720 } 00:11:08.720 } 00:11:08.720 } 00:11:08.720 ]' 00:11:08.720 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 00:11:08.720 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:11:08.720 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:11:08.978 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:08.978 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:08.978 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:08.978 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:08.978 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.978 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:08.978 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.978 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:11:08.978 { 00:11:08.978 "name": "Nvme0n1p2", 00:11:08.978 "aliases": [ 00:11:08.978 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:11:08.978 ], 00:11:08.978 "product_name": "GPT Disk", 00:11:08.978 "block_size": 4096, 00:11:08.978 "num_blocks": 774143, 00:11:08.978 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:08.978 "md_size": 64, 00:11:08.978 "md_interleave": false, 00:11:08.978 "dif_type": 0, 00:11:08.978 "assigned_rate_limits": { 00:11:08.978 "rw_ios_per_sec": 0, 00:11:08.978 "rw_mbytes_per_sec": 0, 00:11:08.978 "r_mbytes_per_sec": 0, 00:11:08.978 "w_mbytes_per_sec": 0 00:11:08.978 }, 00:11:08.978 "claimed": false, 00:11:08.978 "zoned": false, 00:11:08.978 "supported_io_types": { 00:11:08.978 "read": true, 00:11:08.978 "write": true, 00:11:08.978 "unmap": true, 00:11:08.978 "flush": true, 00:11:08.978 "reset": true, 00:11:08.978 "nvme_admin": false, 00:11:08.978 "nvme_io": false, 00:11:08.978 "nvme_io_md": false, 00:11:08.978 "write_zeroes": true, 00:11:08.978 "zcopy": false, 00:11:08.978 "get_zone_info": false, 00:11:08.978 "zone_management": false, 00:11:08.978 "zone_append": false, 00:11:08.978 "compare": true, 00:11:08.978 "compare_and_write": false, 00:11:08.978 "abort": true, 00:11:08.978 "seek_hole": false, 00:11:08.978 "seek_data": false, 00:11:08.978 "copy": true, 00:11:08.978 "nvme_iov_md": false 00:11:08.978 }, 00:11:08.978 "driver_specific": { 00:11:08.978 "gpt": { 00:11:08.978 "base_bdev": "Nvme0n1", 00:11:08.978 "offset_blocks": 774400, 00:11:08.979 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:11:08.979 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:08.979 "partition_name": "SPDK_TEST_second" 00:11:08.979 } 00:11:08.979 
} 00:11:08.979 } 00:11:08.979 ]' 00:11:08.979 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:11:08.979 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:11:08.979 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:11:08.979 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:08.979 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:08.979 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:08.979 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 69402 00:11:08.979 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@948 -- # '[' -z 69402 ']' 00:11:08.979 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # kill -0 69402 00:11:08.979 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # uname 00:11:08.979 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:08.979 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69402 00:11:08.979 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:08.979 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:08.979 killing process with pid 69402 00:11:08.979 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69402' 00:11:08.979 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # kill 69402 00:11:08.979 19:32:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # wait 69402 00:11:12.253 00:11:12.253 real 0m4.980s 00:11:12.253 user 0m5.173s 00:11:12.253 sys 0m0.524s 00:11:12.253 19:33:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:12.253 19:33:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:12.253 ************************************ 00:11:12.253 END TEST bdev_gpt_uuid 00:11:12.253 ************************************ 00:11:12.253 19:33:02 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:11:12.253 19:33:02 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:11:12.253 19:33:02 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:11:12.253 19:33:02 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:11:12.253 19:33:02 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:12.253 19:33:02 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:12.253 19:33:02 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:11:12.253 19:33:02 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:11:12.253 19:33:02 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:11:12.253 19:33:02 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 
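The bdev_gpt_uuid stage above shows the GPT metadata that the gpt virtual bdev module attaches to each partition bdev: bdev_get_bdevs is queried by the partition's unique GPT GUID and jq extracts the fields blockdev.sh asserts on (the alias, unique_partition_guid, partition_type_guid and partition_name visible in the JSON). Outside the harness the same inspection looks roughly like this, assuming a running spdk_tgt that has examined the GPT disk; rpc.py talks to /var/tmp/spdk.sock by default, the socket the test waited on, and the GUID below is the one from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    uuid=6f89f330-603b-4116-ac73-2ca8eae53030        # Nvme0n1p1 in the JSON above
    bdev=$("$rpc" bdev_get_bdevs -b "$uuid")
    echo "$bdev" | jq -r '.[0].aliases[0]'
    echo "$bdev" | jq -r '.[0].driver_specific.gpt.unique_partition_guid'
    echo "$bdev" | jq -r '.[0].driver_specific.gpt.partition_name'      # SPDK_TEST_first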
00:11:12.253 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:12.510 Waiting for block devices as requested 00:11:12.510 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:12.767 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:12.767 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:13.023 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:18.280 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:18.280 19:33:08 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme1n1 ]] 00:11:18.280 19:33:08 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme1n1 00:11:18.280 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:18.280 /dev/nvme1n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:11:18.280 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:18.280 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:11:18.280 19:33:08 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:11:18.280 00:11:18.280 real 1m11.409s 00:11:18.280 user 1m29.367s 00:11:18.280 sys 0m11.979s 00:11:18.280 19:33:08 blockdev_nvme_gpt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:18.280 ************************************ 00:11:18.280 END TEST blockdev_nvme_gpt 00:11:18.280 ************************************ 00:11:18.280 19:33:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:18.280 19:33:08 -- common/autotest_common.sh@1142 -- # return 0 00:11:18.280 19:33:08 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:18.280 19:33:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:18.280 19:33:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:18.280 19:33:08 -- common/autotest_common.sh@10 -- # set +x 00:11:18.280 ************************************ 00:11:18.280 START TEST nvme 00:11:18.280 ************************************ 00:11:18.280 19:33:08 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:18.280 * Looking for test storage... 
00:11:18.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:18.280 19:33:09 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:18.904 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:19.843 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:19.843 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:19.843 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:19.843 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:19.843 19:33:10 nvme -- nvme/nvme.sh@79 -- # uname 00:11:19.843 19:33:10 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:11:19.843 19:33:10 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:11:19.843 19:33:10 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:11:19.843 19:33:10 nvme -- common/autotest_common.sh@1080 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:11:19.843 19:33:10 nvme -- common/autotest_common.sh@1066 -- # _randomize_va_space=2 00:11:19.843 19:33:10 nvme -- common/autotest_common.sh@1067 -- # echo 0 00:11:19.843 19:33:10 nvme -- common/autotest_common.sh@1069 -- # stubpid=70052 00:11:19.843 19:33:10 nvme -- common/autotest_common.sh@1068 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:11:19.843 Waiting for stub to ready for secondary processes... 00:11:19.843 19:33:10 nvme -- common/autotest_common.sh@1070 -- # echo Waiting for stub to ready for secondary processes... 00:11:19.843 19:33:10 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:19.843 19:33:10 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/70052 ]] 00:11:19.843 19:33:10 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:11:19.843 [2024-07-15 19:33:10.557464] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:11:19.843 [2024-07-15 19:33:10.557666] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:11:20.777 19:33:11 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:20.777 19:33:11 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/70052 ]] 00:11:20.777 19:33:11 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:11:21.039 [2024-07-15 19:33:11.623708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:21.306 [2024-07-15 19:33:11.932672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.306 [2024-07-15 19:33:11.932738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.306 [2024-07-15 19:33:11.932764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.306 [2024-07-15 19:33:11.953642] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:11:21.306 [2024-07-15 19:33:11.953714] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:21.306 [2024-07-15 19:33:11.967565] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:11:21.306 [2024-07-15 19:33:11.967727] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:11:21.306 [2024-07-15 19:33:11.971484] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:21.306 [2024-07-15 19:33:11.971739] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:11:21.306 [2024-07-15 19:33:11.971863] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:11:21.306 [2024-07-15 19:33:11.975638] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:21.306 [2024-07-15 19:33:11.976136] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:11:21.306 [2024-07-15 19:33:11.976492] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:11:21.306 [2024-07-15 19:33:11.979808] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:21.306 [2024-07-15 19:33:11.980118] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:11:21.306 [2024-07-15 19:33:11.980312] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:11:21.306 [2024-07-15 19:33:11.980417] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:11:21.306 [2024-07-15 19:33:11.980512] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:11:21.872 19:33:12 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:21.872 done. 00:11:21.872 19:33:12 nvme -- common/autotest_common.sh@1076 -- # echo done. 
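Before the NVMe suite proper begins, autotest_common.sh starts the stub app (test/app/stub/stub -s 4096 -i 0 -m 0xE) as a primary DPDK process with a fixed shared-memory id, so the individual test binaries can attach to the already-initialized controllers as secondary processes; the cuse messages above are the stub exposing spdk/nvme* character devices for every controller and namespace it claimed. The harness then polls until the stub signals readiness. A minimal sketch of that wait, matching the checks visible above (the real helper also wires up kill traps and xtrace handling), is:

    stubpid=70052                                    # PID reported for this run
    while [ ! -e /var/run/spdk_stub0 ]; do           # stub not ready yet
        [ -e "/proc/$stubpid" ] || { echo "stub exited early" >&2; exit 1; }
        sleep 1s
    done
    echo done.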
00:11:21.872 19:33:12 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:21.872 19:33:12 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:11:21.872 19:33:12 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:21.872 19:33:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:21.872 ************************************ 00:11:21.872 START TEST nvme_reset 00:11:21.872 ************************************ 00:11:21.872 19:33:12 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:22.130 Initializing NVMe Controllers 00:11:22.130 Skipping QEMU NVMe SSD at 0000:00:10.0 00:11:22.130 Skipping QEMU NVMe SSD at 0000:00:11.0 00:11:22.130 Skipping QEMU NVMe SSD at 0000:00:13.0 00:11:22.130 Skipping QEMU NVMe SSD at 0000:00:12.0 00:11:22.130 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:11:22.130 ************************************ 00:11:22.130 END TEST nvme_reset 00:11:22.130 ************************************ 00:11:22.130 00:11:22.130 real 0m0.360s 00:11:22.130 user 0m0.125s 00:11:22.130 sys 0m0.186s 00:11:22.130 19:33:12 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:22.130 19:33:12 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:11:22.388 19:33:12 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:22.388 19:33:12 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:11:22.388 19:33:12 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:22.388 19:33:12 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.388 19:33:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:22.388 ************************************ 00:11:22.388 START TEST nvme_identify 00:11:22.388 ************************************ 00:11:22.388 19:33:12 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:11:22.388 19:33:12 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:11:22.388 19:33:12 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:11:22.389 19:33:12 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:11:22.389 19:33:12 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:11:22.389 19:33:12 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:22.389 19:33:12 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:11:22.389 19:33:12 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:22.389 19:33:12 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:22.389 19:33:12 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:22.389 19:33:13 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:11:22.389 19:33:13 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:22.389 19:33:13 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:11:22.656 [2024-07-15 19:33:13.301528] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 70086 terminated unexpected 00:11:22.656 ===================================================== 00:11:22.656 NVMe 
Controller at 0000:00:10.0 [1b36:0010] 00:11:22.656 ===================================================== 00:11:22.656 Controller Capabilities/Features 00:11:22.656 ================================ 00:11:22.656 Vendor ID: 1b36 00:11:22.656 Subsystem Vendor ID: 1af4 00:11:22.656 Serial Number: 12340 00:11:22.657 Model Number: QEMU NVMe Ctrl 00:11:22.657 Firmware Version: 8.0.0 00:11:22.657 Recommended Arb Burst: 6 00:11:22.657 IEEE OUI Identifier: 00 54 52 00:11:22.657 Multi-path I/O 00:11:22.657 May have multiple subsystem ports: No 00:11:22.657 May have multiple controllers: No 00:11:22.657 Associated with SR-IOV VF: No 00:11:22.657 Max Data Transfer Size: 524288 00:11:22.657 Max Number of Namespaces: 256 00:11:22.657 Max Number of I/O Queues: 64 00:11:22.657 NVMe Specification Version (VS): 1.4 00:11:22.657 NVMe Specification Version (Identify): 1.4 00:11:22.657 Maximum Queue Entries: 2048 00:11:22.657 Contiguous Queues Required: Yes 00:11:22.657 Arbitration Mechanisms Supported 00:11:22.657 Weighted Round Robin: Not Supported 00:11:22.657 Vendor Specific: Not Supported 00:11:22.657 Reset Timeout: 7500 ms 00:11:22.657 Doorbell Stride: 4 bytes 00:11:22.657 NVM Subsystem Reset: Not Supported 00:11:22.657 Command Sets Supported 00:11:22.657 NVM Command Set: Supported 00:11:22.657 Boot Partition: Not Supported 00:11:22.657 Memory Page Size Minimum: 4096 bytes 00:11:22.657 Memory Page Size Maximum: 65536 bytes 00:11:22.657 Persistent Memory Region: Not Supported 00:11:22.657 Optional Asynchronous Events Supported 00:11:22.657 Namespace Attribute Notices: Supported 00:11:22.657 Firmware Activation Notices: Not Supported 00:11:22.657 ANA Change Notices: Not Supported 00:11:22.657 PLE Aggregate Log Change Notices: Not Supported 00:11:22.657 LBA Status Info Alert Notices: Not Supported 00:11:22.657 EGE Aggregate Log Change Notices: Not Supported 00:11:22.657 Normal NVM Subsystem Shutdown event: Not Supported 00:11:22.657 Zone Descriptor Change Notices: Not Supported 00:11:22.657 Discovery Log Change Notices: Not Supported 00:11:22.657 Controller Attributes 00:11:22.657 128-bit Host Identifier: Not Supported 00:11:22.657 Non-Operational Permissive Mode: Not Supported 00:11:22.657 NVM Sets: Not Supported 00:11:22.657 Read Recovery Levels: Not Supported 00:11:22.658 Endurance Groups: Not Supported 00:11:22.658 Predictable Latency Mode: Not Supported 00:11:22.658 Traffic Based Keep ALive: Not Supported 00:11:22.658 Namespace Granularity: Not Supported 00:11:22.658 SQ Associations: Not Supported 00:11:22.658 UUID List: Not Supported 00:11:22.658 Multi-Domain Subsystem: Not Supported 00:11:22.658 Fixed Capacity Management: Not Supported 00:11:22.658 Variable Capacity Management: Not Supported 00:11:22.658 Delete Endurance Group: Not Supported 00:11:22.658 Delete NVM Set: Not Supported 00:11:22.658 Extended LBA Formats Supported: Supported 00:11:22.658 Flexible Data Placement Supported: Not Supported 00:11:22.658 00:11:22.658 Controller Memory Buffer Support 00:11:22.658 ================================ 00:11:22.658 Supported: No 00:11:22.658 00:11:22.658 Persistent Memory Region Support 00:11:22.658 ================================ 00:11:22.658 Supported: No 00:11:22.658 00:11:22.658 Admin Command Set Attributes 00:11:22.658 ============================ 00:11:22.658 Security Send/Receive: Not Supported 00:11:22.658 Format NVM: Supported 00:11:22.658 Firmware Activate/Download: Not Supported 00:11:22.658 Namespace Management: Supported 00:11:22.658 Device Self-Test: Not Supported 00:11:22.658 
Directives: Supported 00:11:22.658 NVMe-MI: Not Supported 00:11:22.658 Virtualization Management: Not Supported 00:11:22.658 Doorbell Buffer Config: Supported 00:11:22.658 Get LBA Status Capability: Not Supported 00:11:22.658 Command & Feature Lockdown Capability: Not Supported 00:11:22.658 Abort Command Limit: 4 00:11:22.658 Async Event Request Limit: 4 00:11:22.658 Number of Firmware Slots: N/A 00:11:22.658 Firmware Slot 1 Read-Only: N/A 00:11:22.658 Firmware Activation Without Reset: N/A 00:11:22.658 Multiple Update Detection Support: N/A 00:11:22.658 Firmware Update Granularity: No Information Provided 00:11:22.658 Per-Namespace SMART Log: Yes 00:11:22.658 Asymmetric Namespace Access Log Page: Not Supported 00:11:22.658 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:22.658 Command Effects Log Page: Supported 00:11:22.658 Get Log Page Extended Data: Supported 00:11:22.658 Telemetry Log Pages: Not Supported 00:11:22.659 Persistent Event Log Pages: Not Supported 00:11:22.659 Supported Log Pages Log Page: May Support 00:11:22.659 Commands Supported & Effects Log Page: Not Supported 00:11:22.659 Feature Identifiers & Effects Log Page:May Support 00:11:22.659 NVMe-MI Commands & Effects Log Page: May Support 00:11:22.659 Data Area 4 for Telemetry Log: Not Supported 00:11:22.659 Error Log Page Entries Supported: 1 00:11:22.659 Keep Alive: Not Supported 00:11:22.659 00:11:22.659 NVM Command Set Attributes 00:11:22.659 ========================== 00:11:22.659 Submission Queue Entry Size 00:11:22.659 Max: 64 00:11:22.659 Min: 64 00:11:22.659 Completion Queue Entry Size 00:11:22.659 Max: 16 00:11:22.659 Min: 16 00:11:22.659 Number of Namespaces: 256 00:11:22.659 Compare Command: Supported 00:11:22.659 Write Uncorrectable Command: Not Supported 00:11:22.659 Dataset Management Command: Supported 00:11:22.659 Write Zeroes Command: Supported 00:11:22.659 Set Features Save Field: Supported 00:11:22.659 Reservations: Not Supported 00:11:22.659 Timestamp: Supported 00:11:22.659 Copy: Supported 00:11:22.659 Volatile Write Cache: Present 00:11:22.659 Atomic Write Unit (Normal): 1 00:11:22.659 Atomic Write Unit (PFail): 1 00:11:22.660 Atomic Compare & Write Unit: 1 00:11:22.660 Fused Compare & Write: Not Supported 00:11:22.660 Scatter-Gather List 00:11:22.660 SGL Command Set: Supported 00:11:22.660 SGL Keyed: Not Supported 00:11:22.660 SGL Bit Bucket Descriptor: Not Supported 00:11:22.660 SGL Metadata Pointer: Not Supported 00:11:22.660 Oversized SGL: Not Supported 00:11:22.660 SGL Metadata Address: Not Supported 00:11:22.660 SGL Offset: Not Supported 00:11:22.660 Transport SGL Data Block: Not Supported 00:11:22.660 Replay Protected Memory Block: Not Supported 00:11:22.660 00:11:22.660 Firmware Slot Information 00:11:22.660 ========================= 00:11:22.660 Active slot: 1 00:11:22.660 Slot 1 Firmware Revision: 1.0 00:11:22.660 00:11:22.660 00:11:22.660 Commands Supported and Effects 00:11:22.660 ============================== 00:11:22.660 Admin Commands 00:11:22.660 -------------- 00:11:22.660 Delete I/O Submission Queue (00h): Supported 00:11:22.660 Create I/O Submission Queue (01h): Supported 00:11:22.660 Get Log Page (02h): Supported 00:11:22.660 Delete I/O Completion Queue (04h): Supported 00:11:22.660 Create I/O Completion Queue (05h): Supported 00:11:22.660 Identify (06h): Supported 00:11:22.660 Abort (08h): Supported 00:11:22.660 Set Features (09h): Supported 00:11:22.660 Get Features (0Ah): Supported 00:11:22.660 Asynchronous Event Request (0Ch): Supported 00:11:22.660 Namespace Attachment 
(15h): Supported NS-Inventory-Change 00:11:22.660 Directive Send (19h): Supported 00:11:22.660 Directive Receive (1Ah): Supported 00:11:22.660 Virtualization Management (1Ch): Supported 00:11:22.660 Doorbell Buffer Config (7Ch): Supported 00:11:22.660 Format NVM (80h): Supported LBA-Change 00:11:22.661 I/O Commands 00:11:22.661 ------------ 00:11:22.661 Flush (00h): Supported LBA-Change 00:11:22.661 Write (01h): Supported LBA-Change 00:11:22.661 Read (02h): Supported 00:11:22.661 Compare (05h): Supported 00:11:22.661 Write Zeroes (08h): Supported LBA-Change 00:11:22.661 Dataset Management (09h): Supported LBA-Change 00:11:22.661 Unknown (0Ch): Supported 00:11:22.661 Unknown (12h): Supported 00:11:22.661 Copy (19h): Supported LBA-Change 00:11:22.661 Unknown (1Dh): Supported LBA-Change 00:11:22.661 00:11:22.661 Error Log 00:11:22.661 ========= 00:11:22.661 00:11:22.661 Arbitration 00:11:22.661 =========== 00:11:22.661 Arbitration Burst: no limit 00:11:22.661 00:11:22.661 Power Management 00:11:22.661 ================ 00:11:22.661 Number of Power States: 1 00:11:22.661 Current Power State: Power State #0 00:11:22.661 Power State #0: 00:11:22.661 Max Power: 25.00 W 00:11:22.661 Non-Operational State: Operational 00:11:22.661 Entry Latency: 16 microseconds 00:11:22.661 Exit Latency: 4 microseconds 00:11:22.661 Relative Read Throughput: 0 00:11:22.661 Relative Read Latency: 0 00:11:22.661 Relative Write Throughput: 0 00:11:22.661 Relative Write Latency: 0 00:11:22.661 [2024-07-15 19:33:13.303082] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 70086 terminated unexpected 00:11:22.661 Idle Power: Not Reported 00:11:22.661 Active Power: Not Reported 00:11:22.661 Non-Operational Permissive Mode: Not Supported 00:11:22.661 00:11:22.661 Health Information 00:11:22.661 ================== 00:11:22.661 Critical Warnings: 00:11:22.661 Available Spare Space: OK 00:11:22.661 Temperature: OK 00:11:22.662 Device Reliability: OK 00:11:22.662 Read Only: No 00:11:22.662 Volatile Memory Backup: OK 00:11:22.662 Current Temperature: 323 Kelvin (50 Celsius) 00:11:22.662 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:22.662 Available Spare: 0% 00:11:22.662 Available Spare Threshold: 0% 00:11:22.662 Life Percentage Used: 0% 00:11:22.662 Data Units Read: 1116 00:11:22.662 Data Units Written: 949 00:11:22.662 Host Read Commands: 49730 00:11:22.662 Host Write Commands: 48212 00:11:22.662 Controller Busy Time: 0 minutes 00:11:22.662 Power Cycles: 0 00:11:22.662 Power On Hours: 0 hours 00:11:22.662 Unsafe Shutdowns: 0 00:11:22.662 Unrecoverable Media Errors: 0 00:11:22.662 Lifetime Error Log Entries: 0 00:11:22.662 Warning Temperature Time: 0 minutes 00:11:22.662 Critical Temperature Time: 0 minutes 00:11:22.662 00:11:22.662 Number of Queues 00:11:22.662 ================ 00:11:22.662 Number of I/O Submission Queues: 64 00:11:22.662 Number of I/O Completion Queues: 64 00:11:22.662 00:11:22.662 ZNS Specific Controller Data 00:11:22.662 ============================ 00:11:22.662 Zone Append Size Limit: 0 00:11:22.662 00:11:22.662 00:11:22.662 Active Namespaces 00:11:22.662 ================= 00:11:22.662 Namespace ID:1 00:11:22.662 Error Recovery Timeout: Unlimited 00:11:22.663 Command Set Identifier: NVM (00h) 00:11:22.663 Deallocate: Supported 00:11:22.663 Deallocated/Unwritten Error: Supported 00:11:22.663 Deallocated Read Value: All 0x00 00:11:22.663 Deallocate in Write Zeroes: Not Supported 00:11:22.663 Deallocated Guard Field: 0xFFFF 00:11:22.663 Flush: Supported 00:11:22.663
Reservation: Not Supported 00:11:22.663 Metadata Transferred as: Separate Metadata Buffer 00:11:22.663 Namespace Sharing Capabilities: Private 00:11:22.663 Size (in LBAs): 1548666 (5GiB) 00:11:22.663 Capacity (in LBAs): 1548666 (5GiB) 00:11:22.663 Utilization (in LBAs): 1548666 (5GiB) 00:11:22.663 Thin Provisioning: Not Supported 00:11:22.663 Per-NS Atomic Units: No 00:11:22.663 Maximum Single Source Range Length: 128 00:11:22.663 Maximum Copy Length: 128 00:11:22.663 Maximum Source Range Count: 128 00:11:22.663 NGUID/EUI64 Never Reused: No 00:11:22.663 Namespace Write Protected: No 00:11:22.663 Number of LBA Formats: 8 00:11:22.663 Current LBA Format: LBA Format #07 00:11:22.663 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:22.663 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:22.663 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:22.663 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:22.663 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:22.663 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:22.663 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:22.663 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:22.663 00:11:22.664 NVM Specific Namespace Data 00:11:22.664 =========================== 00:11:22.664 Logical Block Storage Tag Mask: 0 00:11:22.664 Protection Information Capabilities: 00:11:22.664 16b Guard Protection Information Storage Tag Support: No 00:11:22.664 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:22.664 Storage Tag Check Read Support: No 00:11:22.664 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.664 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.664 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.664 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.664 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.664 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.664 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.664 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.664 ===================================================== 00:11:22.664 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:22.664 ===================================================== 00:11:22.664 Controller Capabilities/Features 00:11:22.664 ================================ 00:11:22.664 Vendor ID: 1b36 00:11:22.664 Subsystem Vendor ID: 1af4 00:11:22.664 Serial Number: 12341 00:11:22.664 Model Number: QEMU NVMe Ctrl 00:11:22.664 Firmware Version: 8.0.0 00:11:22.664 Recommended Arb Burst: 6 00:11:22.664 IEEE OUI Identifier: 00 54 52 00:11:22.664 Multi-path I/O 00:11:22.664 May have multiple subsystem ports: No 00:11:22.664 May have multiple controllers: No 00:11:22.665 Associated with SR-IOV VF: No 00:11:22.665 Max Data Transfer Size: 524288 00:11:22.665 Max Number of Namespaces: 256 00:11:22.665 Max Number of I/O Queues: 64 00:11:22.665 NVMe Specification Version (VS): 1.4 00:11:22.665 NVMe Specification Version (Identify): 1.4 00:11:22.665 Maximum Queue Entries: 2048 00:11:22.665 Contiguous Queues Required: Yes 00:11:22.665 Arbitration Mechanisms Supported 00:11:22.665 Weighted Round Robin: Not Supported 00:11:22.665 Vendor 
Specific: Not Supported 00:11:22.665 Reset Timeout: 7500 ms 00:11:22.665 Doorbell Stride: 4 bytes 00:11:22.665 NVM Subsystem Reset: Not Supported 00:11:22.665 Command Sets Supported 00:11:22.665 NVM Command Set: Supported 00:11:22.665 Boot Partition: Not Supported 00:11:22.665 Memory Page Size Minimum: 4096 bytes 00:11:22.665 Memory Page Size Maximum: 65536 bytes 00:11:22.665 Persistent Memory Region: Not Supported 00:11:22.665 Optional Asynchronous Events Supported 00:11:22.665 Namespace Attribute Notices: Supported 00:11:22.665 Firmware Activation Notices: Not Supported 00:11:22.665 ANA Change Notices: Not Supported 00:11:22.665 PLE Aggregate Log Change Notices: Not Supported 00:11:22.665 LBA Status Info Alert Notices: Not Supported 00:11:22.665 EGE Aggregate Log Change Notices: Not Supported 00:11:22.665 Normal NVM Subsystem Shutdown event: Not Supported 00:11:22.665 Zone Descriptor Change Notices: Not Supported 00:11:22.666 Discovery Log Change Notices: Not Supported 00:11:22.666 Controller Attributes 00:11:22.666 128-bit Host Identifier: Not Supported 00:11:22.666 Non-Operational Permissive Mode: Not Supported 00:11:22.666 NVM Sets: Not Supported 00:11:22.666 Read Recovery Levels: Not Supported 00:11:22.666 Endurance Groups: Not Supported 00:11:22.666 Predictable Latency Mode: Not Supported 00:11:22.666 Traffic Based Keep ALive: Not Supported 00:11:22.666 Namespace Granularity: Not Supported 00:11:22.666 SQ Associations: Not Supported 00:11:22.666 UUID List: Not Supported 00:11:22.666 Multi-Domain Subsystem: Not Supported 00:11:22.666 Fixed Capacity Management: Not Supported 00:11:22.666 Variable Capacity Management: Not Supported 00:11:22.666 Delete Endurance Group: Not Supported 00:11:22.666 Delete NVM Set: Not Supported 00:11:22.666 Extended LBA Formats Supported: Supported 00:11:22.666 Flexible Data Placement Supported: Not Supported 00:11:22.666 00:11:22.666 Controller Memory Buffer Support 00:11:22.666 ================================ 00:11:22.666 Supported: No 00:11:22.666 00:11:22.666 Persistent Memory Region Support 00:11:22.666 ================================ 00:11:22.666 Supported: No 00:11:22.666 00:11:22.666 Admin Command Set Attributes 00:11:22.666 ============================ 00:11:22.666 Security Send/Receive: Not Supported 00:11:22.666 Format NVM: Supported 00:11:22.666 Firmware Activate/Download: Not Supported 00:11:22.666 Namespace Management: Supported 00:11:22.666 Device Self-Test: Not Supported 00:11:22.666 Directives: Supported 00:11:22.666 NVMe-MI: Not Supported 00:11:22.666 Virtualization Management: Not Supported 00:11:22.666 Doorbell Buffer Config: Supported 00:11:22.666 Get LBA Status Capability: Not Supported 00:11:22.666 Command & Feature Lockdown Capability: Not Supported 00:11:22.666 Abort Command Limit: 4 00:11:22.666 Async Event Request Limit: 4 00:11:22.666 Number of Firmware Slots: N/A 00:11:22.666 Firmware Slot 1 Read-Only: N/A 00:11:22.666 Firmware Activation Without Reset: N/A 00:11:22.666 Multiple Update Detection Support: N/A 00:11:22.667 Firmware Update Granularity: No Information Provided 00:11:22.667 Per-Namespace SMART Log: Yes 00:11:22.667 Asymmetric Namespace Access Log Page: Not Supported 00:11:22.667 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:22.667 Command Effects Log Page: Supported 00:11:22.667 Get Log Page Extended Data: Supported 00:11:22.667 Telemetry Log Pages: Not Supported 00:11:22.667 Persistent Event Log Pages: Not Supported 00:11:22.667 Supported Log Pages Log Page: May Support 00:11:22.667 Commands Supported & Effects 
Log Page: Not Supported 00:11:22.667 Feature Identifiers & Effects Log Page:May Support 00:11:22.667 NVMe-MI Commands & Effects Log Page: May Support 00:11:22.667 Data Area 4 for Telemetry Log: Not Supported 00:11:22.667 Error Log Page Entries Supported: 1 00:11:22.667 Keep Alive: Not Supported 00:11:22.667 00:11:22.667 NVM Command Set Attributes 00:11:22.667 ========================== 00:11:22.667 Submission Queue Entry Size 00:11:22.667 Max: 64 00:11:22.667 Min: 64 00:11:22.667 Completion Queue Entry Size 00:11:22.667 Max: 16 00:11:22.667 Min: 16 00:11:22.667 Number of Namespaces: 256 00:11:22.667 Compare Command: Supported 00:11:22.667 Write Uncorrectable Command: Not Supported 00:11:22.667 Dataset Management Command: Supported 00:11:22.667 Write Zeroes Command: Supported 00:11:22.667 Set Features Save Field: Supported 00:11:22.667 Reservations: Not Supported 00:11:22.667 Timestamp: Supported 00:11:22.667 Copy: Supported 00:11:22.667 Volatile Write Cache: Present 00:11:22.667 Atomic Write Unit (Normal): 1 00:11:22.667 Atomic Write Unit (PFail): 1 00:11:22.667 Atomic Compare & Write Unit: 1 00:11:22.667 Fused Compare & Write: Not Supported 00:11:22.667 Scatter-Gather List 00:11:22.667 SGL Command Set: Supported 00:11:22.667 SGL Keyed: Not Supported 00:11:22.667 SGL Bit Bucket Descriptor: Not Supported 00:11:22.667 SGL Metadata Pointer: Not Supported 00:11:22.667 Oversized SGL: Not Supported 00:11:22.667 SGL Metadata Address: Not Supported 00:11:22.667 SGL Offset: Not Supported 00:11:22.667 Transport SGL Data Block: Not Supported 00:11:22.667 Replay Protected Memory Block: Not Supported 00:11:22.667 00:11:22.667 Firmware Slot Information 00:11:22.667 ========================= 00:11:22.667 Active slot: 1 00:11:22.667 Slot 1 Firmware Revision: 1.0 00:11:22.667 00:11:22.667 00:11:22.667 Commands Supported and Effects 00:11:22.667 ============================== 00:11:22.667 Admin Commands 00:11:22.667 -------------- 00:11:22.667 Delete I/O Submission Queue (00h): Supported 00:11:22.667 Create I/O Submission Queue (01h): Supported 00:11:22.667 Get Log Page (02h): Supported 00:11:22.667 Delete I/O Completion Queue (04h): Supported 00:11:22.667 Create I/O Completion Queue (05h): Supported 00:11:22.667 Identify (06h): Supported 00:11:22.667 Abort (08h): Supported 00:11:22.667 Set Features (09h): Supported 00:11:22.667 Get Features (0Ah): Supported 00:11:22.667 Asynchronous Event Request (0Ch): Supported 00:11:22.667 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:22.667 Directive Send (19h): Supported 00:11:22.667 Directive Receive (1Ah): Supported 00:11:22.667 Virtualization Management (1Ch): Supported 00:11:22.668 Doorbell Buffer Config (7Ch): Supported 00:11:22.668 Format NVM (80h): Supported LBA-Change 00:11:22.668 I/O Commands 00:11:22.668 ------------ 00:11:22.668 Flush (00h): Supported LBA-Change 00:11:22.668 Write (01h): Supported LBA-Change 00:11:22.668 Read (02h): Supported 00:11:22.668 Compare (05h): Supported 00:11:22.668 Write Zeroes (08h): Supported LBA-Change 00:11:22.668 Dataset Management (09h): Supported LBA-Change 00:11:22.668 Unknown (0Ch): Supported 00:11:22.668 Unknown (12h): Supported 00:11:22.668 Copy (19h): Supported LBA-Change 00:11:22.668 Unknown (1Dh): Supported LBA-Change 00:11:22.668 00:11:22.668 Error Log 00:11:22.668 ========= 00:11:22.668 00:11:22.668 Arbitration 00:11:22.668 =========== 00:11:22.668 Arbitration Burst: no limit 00:11:22.668 00:11:22.668 Power Management 00:11:22.668 ================ 00:11:22.668 Number of Power States: 1 
00:11:22.668 Current Power State: Power State #0 00:11:22.668 Power State #0: 00:11:22.668 Max Power: 25.00 W 00:11:22.668 Non-Operational State: Operational 00:11:22.668 Entry Latency: 16 microseconds 00:11:22.668 Exit Latency: 4 microseconds 00:11:22.668 Relative Read Throughput: 0 00:11:22.668 Relative Read Latency: 0 00:11:22.668 Relative Write Throughput: 0 00:11:22.668 Relative Write Latency: 0 00:11:22.668 Idle Power: Not Reported 00:11:22.668 Active Power: Not Reported 00:11:22.669 Non-Operational Permissive Mode: Not Supported 00:11:22.669 00:11:22.669 Health Information 00:11:22.669 ================== 00:11:22.669 Critical Warnings: 00:11:22.669 Available Spare Space: OK 00:11:22.669 [2024-07-15 19:33:13.304207] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 70086 terminated unexpected 00:11:22.669 Temperature: OK 00:11:22.669 Device Reliability: OK 00:11:22.669 Read Only: No 00:11:22.669 Volatile Memory Backup: OK 00:11:22.669 Current Temperature: 323 Kelvin (50 Celsius) 00:11:22.669 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:22.669 Available Spare: 0% 00:11:22.669 Available Spare Threshold: 0% 00:11:22.669 Life Percentage Used: 0% 00:11:22.669 Data Units Read: 810 00:11:22.669 Data Units Written: 661 00:11:22.669 Host Read Commands: 35727 00:11:22.669 Host Write Commands: 33474 00:11:22.669 Controller Busy Time: 0 minutes 00:11:22.669 Power Cycles: 0 00:11:22.669 Power On Hours: 0 hours 00:11:22.669 Unsafe Shutdowns: 0 00:11:22.669 Unrecoverable Media Errors: 0 00:11:22.669 Lifetime Error Log Entries: 0 00:11:22.669 Warning Temperature Time: 0 minutes 00:11:22.669 Critical Temperature Time: 0 minutes 00:11:22.669 00:11:22.669 Number of Queues 00:11:22.669 ================ 00:11:22.669 Number of I/O Submission Queues: 64 00:11:22.669 Number of I/O Completion Queues: 64 00:11:22.669 00:11:22.669 ZNS Specific Controller Data 00:11:22.669 ============================ 00:11:22.669 Zone Append Size Limit: 0 00:11:22.670 00:11:22.670 00:11:22.670 Active Namespaces 00:11:22.670 ================= 00:11:22.670 Namespace ID:1 00:11:22.670 Error Recovery Timeout: Unlimited 00:11:22.670 Command Set Identifier: NVM (00h) 00:11:22.670 Deallocate: Supported 00:11:22.670 Deallocated/Unwritten Error: Supported 00:11:22.670 Deallocated Read Value: All 0x00 00:11:22.670 Deallocate in Write Zeroes: Not Supported 00:11:22.670 Deallocated Guard Field: 0xFFFF 00:11:22.670 Flush: Supported 00:11:22.670 Reservation: Not Supported 00:11:22.670 Namespace Sharing Capabilities: Private 00:11:22.670 Size (in LBAs): 1310720 (5GiB) 00:11:22.670 Capacity (in LBAs): 1310720 (5GiB) 00:11:22.670 Utilization (in LBAs): 1310720 (5GiB) 00:11:22.670 Thin Provisioning: Not Supported 00:11:22.670 Per-NS Atomic Units: No 00:11:22.670 Maximum Single Source Range Length: 128 00:11:22.670 Maximum Copy Length: 128 00:11:22.670 Maximum Source Range Count: 128 00:11:22.670 NGUID/EUI64 Never Reused: No 00:11:22.670 Namespace Write Protected: No 00:11:22.670 Number of LBA Formats: 8 00:11:22.670 Current LBA Format: LBA Format #04 00:11:22.670 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:22.670 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:22.670 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:22.670 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:22.670 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:22.670 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:22.670 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:22.670 LBA Format #07:
Data Size: 4096 Metadata Size: 64 00:11:22.670 00:11:22.670 NVM Specific Namespace Data 00:11:22.670 =========================== 00:11:22.670 Logical Block Storage Tag Mask: 0 00:11:22.670 Protection Information Capabilities: 00:11:22.670 16b Guard Protection Information Storage Tag Support: No 00:11:22.671 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:22.671 Storage Tag Check Read Support: No 00:11:22.671 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.671 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.671 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.671 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.671 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.671 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.671 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.671 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.671 ===================================================== 00:11:22.671 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:22.671 ===================================================== 00:11:22.671 Controller Capabilities/Features 00:11:22.671 ================================ 00:11:22.671 Vendor ID: 1b36 00:11:22.671 Subsystem Vendor ID: 1af4 00:11:22.671 Serial Number: 12343 00:11:22.672 Model Number: QEMU NVMe Ctrl 00:11:22.672 Firmware Version: 8.0.0 00:11:22.672 Recommended Arb Burst: 6 00:11:22.672 IEEE OUI Identifier: 00 54 52 00:11:22.672 Multi-path I/O 00:11:22.672 May have multiple subsystem ports: No 00:11:22.672 May have multiple controllers: Yes 00:11:22.672 Associated with SR-IOV VF: No 00:11:22.672 Max Data Transfer Size: 524288 00:11:22.672 Max Number of Namespaces: 256 00:11:22.672 Max Number of I/O Queues: 64 00:11:22.672 NVMe Specification Version (VS): 1.4 00:11:22.672 NVMe Specification Version (Identify): 1.4 00:11:22.672 Maximum Queue Entries: 2048 00:11:22.672 Contiguous Queues Required: Yes 00:11:22.672 Arbitration Mechanisms Supported 00:11:22.672 Weighted Round Robin: Not Supported 00:11:22.672 Vendor Specific: Not Supported 00:11:22.672 Reset Timeout: 7500 ms 00:11:22.672 Doorbell Stride: 4 bytes 00:11:22.672 NVM Subsystem Reset: Not Supported 00:11:22.672 Command Sets Supported 00:11:22.672 NVM Command Set: Supported 00:11:22.672 Boot Partition: Not Supported 00:11:22.672 Memory Page Size Minimum: 4096 bytes 00:11:22.672 Memory Page Size Maximum: 65536 bytes 00:11:22.672 Persistent Memory Region: Not Supported 00:11:22.672 Optional Asynchronous Events Supported 00:11:22.672 Namespace Attribute Notices: Supported 00:11:22.672 Firmware Activation Notices: Not Supported 00:11:22.672 ANA Change Notices: Not Supported 00:11:22.672 PLE Aggregate Log Change Notices: Not Supported 00:11:22.672 LBA Status Info Alert Notices: Not Supported 00:11:22.672 EGE Aggregate Log Change Notices: Not Supported 00:11:22.672 Normal NVM Subsystem Shutdown event: Not Supported 00:11:22.672 Zone Descriptor Change Notices: Not Supported 00:11:22.672 Discovery Log Change Notices: Not Supported 00:11:22.672 Controller Attributes 00:11:22.672 128-bit Host Identifier: Not Supported 00:11:22.672 Non-Operational Permissive Mode: Not Supported 
00:11:22.672 NVM Sets: Not Supported 00:11:22.672 Read Recovery Levels: Not Supported 00:11:22.672 Endurance Groups: Supported 00:11:22.672 Predictable Latency Mode: Not Supported 00:11:22.672 Traffic Based Keep ALive: Not Supported 00:11:22.672 Namespace Granularity: Not Supported 00:11:22.672 SQ Associations: Not Supported 00:11:22.672 UUID List: Not Supported 00:11:22.673 Multi-Domain Subsystem: Not Supported 00:11:22.673 Fixed Capacity Management: Not Supported 00:11:22.673 Variable Capacity Management: Not Supported 00:11:22.673 Delete Endurance Group: Not Supported 00:11:22.673 Delete NVM Set: Not Supported 00:11:22.673 Extended LBA Formats Supported: Supported 00:11:22.673 Flexible Data Placement Supported: Supported 00:11:22.673 00:11:22.673 Controller Memory Buffer Support 00:11:22.673 ================================ 00:11:22.673 Supported: No 00:11:22.673 00:11:22.673 Persistent Memory Region Support 00:11:22.673 ================================ 00:11:22.673 Supported: No 00:11:22.673 00:11:22.673 Admin Command Set Attributes 00:11:22.673 ============================ 00:11:22.673 Security Send/Receive: Not Supported 00:11:22.673 Format NVM: Supported 00:11:22.673 Firmware Activate/Download: Not Supported 00:11:22.673 Namespace Management: Supported 00:11:22.673 Device Self-Test: Not Supported 00:11:22.673 Directives: Supported 00:11:22.673 NVMe-MI: Not Supported 00:11:22.673 Virtualization Management: Not Supported 00:11:22.673 Doorbell Buffer Config: Supported 00:11:22.673 Get LBA Status Capability: Not Supported 00:11:22.673 Command & Feature Lockdown Capability: Not Supported 00:11:22.673 Abort Command Limit: 4 00:11:22.673 Async Event Request Limit: 4 00:11:22.673 Number of Firmware Slots: N/A 00:11:22.673 Firmware Slot 1 Read-Only: N/A 00:11:22.674 Firmware Activation Without Reset: N/A 00:11:22.674 Multiple Update Detection Support: N/A 00:11:22.674 Firmware Update Granularity: No Information Provided 00:11:22.674 Per-Namespace SMART Log: Yes 00:11:22.674 Asymmetric Namespace Access Log Page: Not Supported 00:11:22.674 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:22.674 Command Effects Log Page: Supported 00:11:22.674 Get Log Page Extended Data: Supported 00:11:22.674 Telemetry Log Pages: Not Supported 00:11:22.674 Persistent Event Log Pages: Not Supported 00:11:22.674 Supported Log Pages Log Page: May Support 00:11:22.674 Commands Supported & Effects Log Page: Not Supported 00:11:22.674 Feature Identifiers & Effects Log Page:May Support 00:11:22.674 NVMe-MI Commands & Effects Log Page: May Support 00:11:22.674 Data Area 4 for Telemetry Log: Not Supported 00:11:22.674 Error Log Page Entries Supported: 1 00:11:22.674 Keep Alive: Not Supported 00:11:22.674 00:11:22.674 NVM Command Set Attributes 00:11:22.674 ========================== 00:11:22.674 Submission Queue Entry Size 00:11:22.674 Max: 64 00:11:22.674 Min: 64 00:11:22.674 Completion Queue Entry Size 00:11:22.674 Max: 16 00:11:22.675 Min: 16 00:11:22.675 Number of Namespaces: 256 00:11:22.675 Compare Command: Supported 00:11:22.675 Write Uncorrectable Command: Not Supported 00:11:22.675 Dataset Management Command: Supported 00:11:22.675 Write Zeroes Command: Supported 00:11:22.675 Set Features Save Field: Supported 00:11:22.675 Reservations: Not Supported 00:11:22.675 Timestamp: Supported 00:11:22.675 Copy: Supported 00:11:22.675 Volatile Write Cache: Present 00:11:22.675 Atomic Write Unit (Normal): 1 00:11:22.675 Atomic Write Unit (PFail): 1 00:11:22.675 Atomic Compare & Write Unit: 1 00:11:22.675 Fused 
Compare & Write: Not Supported 00:11:22.675 Scatter-Gather List 00:11:22.675 SGL Command Set: Supported 00:11:22.675 SGL Keyed: Not Supported 00:11:22.675 SGL Bit Bucket Descriptor: Not Supported 00:11:22.675 SGL Metadata Pointer: Not Supported 00:11:22.675 Oversized SGL: Not Supported 00:11:22.675 SGL Metadata Address: Not Supported 00:11:22.675 SGL Offset: Not Supported 00:11:22.675 Transport SGL Data Block: Not Supported 00:11:22.675 Replay Protected Memory Block: Not Supported 00:11:22.675 00:11:22.675 Firmware Slot Information 00:11:22.675 ========================= 00:11:22.675 Active slot: 1 00:11:22.675 Slot 1 Firmware Revision: 1.0 00:11:22.675 00:11:22.675 00:11:22.675 Commands Supported and Effects 00:11:22.675 ============================== 00:11:22.675 Admin Commands 00:11:22.675 -------------- 00:11:22.675 Delete I/O Submission Queue (00h): Supported 00:11:22.675 Create I/O Submission Queue (01h): Supported 00:11:22.675 Get Log Page (02h): Supported 00:11:22.675 Delete I/O Completion Queue (04h): Supported 00:11:22.675 Create I/O Completion Queue (05h): Supported 00:11:22.675 Identify (06h): Supported 00:11:22.676 Abort (08h): Supported 00:11:22.676 Set Features (09h): Supported 00:11:22.676 Get Features (0Ah): Supported 00:11:22.676 Asynchronous Event Request (0Ch): Supported 00:11:22.676 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:22.676 Directive Send (19h): Supported 00:11:22.676 Directive Receive (1Ah): Supported 00:11:22.676 Virtualization Management (1Ch): Supported 00:11:22.676 Doorbell Buffer Config (7Ch): Supported 00:11:22.676 Format NVM (80h): Supported LBA-Change 00:11:22.676 I/O Commands 00:11:22.676 ------------ 00:11:22.676 Flush (00h): Supported LBA-Change 00:11:22.676 Write (01h): Supported LBA-Change 00:11:22.676 Read (02h): Supported 00:11:22.676 Compare (05h): Supported 00:11:22.676 Write Zeroes (08h): Supported LBA-Change 00:11:22.676 Dataset Management (09h): Supported LBA-Change 00:11:22.676 Unknown (0Ch): Supported 00:11:22.676 Unknown (12h): Supported 00:11:22.676 Copy (19h): Supported LBA-Change 00:11:22.676 Unknown (1Dh): Supported LBA-Change 00:11:22.676 00:11:22.676 Error Log 00:11:22.676 ========= 00:11:22.676 00:11:22.676 Arbitration 00:11:22.676 =========== 00:11:22.676 Arbitration Burst: no limit 00:11:22.676 00:11:22.676 Power Management 00:11:22.676 ================ 00:11:22.676 Number of Power States: 1 00:11:22.676 Current Power State: Power State #0 00:11:22.676 Power State #0: 00:11:22.676 Max Power: 25.00 W 00:11:22.676 Non-Operational State: Operational 00:11:22.676 Entry Latency: 16 microseconds 00:11:22.676 Exit Latency: 4 microseconds 00:11:22.676 Relative Read Throughput: 0 00:11:22.676 Relative Read Latency: 0 00:11:22.677 Relative Write Throughput: 0 00:11:22.677 Relative Write Latency: 0 00:11:22.677 Idle Power: Not Reported 00:11:22.677 Active Power: Not Reported 00:11:22.677 Non-Operational Permissive Mode: Not Supported 00:11:22.677 00:11:22.677 Health Information 00:11:22.677 ================== 00:11:22.677 Critical Warnings: 00:11:22.677 Available Spare Space: OK 00:11:22.677 Temperature: OK 00:11:22.677 Device Reliability: OK 00:11:22.677 Read Only: No 00:11:22.677 Volatile Memory Backup: OK 00:11:22.677 Current Temperature: 323 Kelvin (50 Celsius) 00:11:22.677 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:22.677 Available Spare: 0% 00:11:22.677 Available Spare Threshold: 0% 00:11:22.677 Life Percentage Used: 0% 00:11:22.677 Data Units Read: 810 00:11:22.677 Data Units Written: 703 00:11:22.677 
Host Read Commands: 35398 00:11:22.677 Host Write Commands: 33988 00:11:22.677 Controller Busy Time: 0 minutes 00:11:22.677 Power Cycles: 0 00:11:22.677 Power On Hours: 0 hours 00:11:22.677 Unsafe Shutdowns: 0 00:11:22.677 Unrecoverable Media Errors: 0 00:11:22.677 Lifetime Error Log Entries: 0 00:11:22.677 Warning Temperature Time: 0 minutes 00:11:22.677 Critical Temperature Time: 0 minutes 00:11:22.677 00:11:22.677 Number of Queues 00:11:22.677 ================ 00:11:22.678 Number of I/O Submission Queues: 64 00:11:22.678 Number of I/O Completion Queues: 64 00:11:22.678 00:11:22.678 ZNS Specific Controller Data 00:11:22.678 ============================ 00:11:22.678 Zone Append Size Limit: 0 00:11:22.678 00:11:22.678 00:11:22.678 Active Namespaces 00:11:22.678 ================= 00:11:22.678 Namespace ID:1 00:11:22.678 Error Recovery Timeout: Unlimited 00:11:22.678 Command Set Identifier: NVM (00h) 00:11:22.678 Deallocate: Supported 00:11:22.678 Deallocated/Unwritten Error: Supported 00:11:22.678 Deallocated Read Value: All 0x00 00:11:22.678 Deallocate in Write Zeroes: Not Supported 00:11:22.678 Deallocated Guard Field: 0xFFFF 00:11:22.678 Flush: Supported 00:11:22.678 Reservation: Not Supported 00:11:22.678 Namespace Sharing Capabilities: Multiple Controllers 00:11:22.678 Size (in LBAs): 262144 (1GiB) 00:11:22.678 Capacity (in LBAs): 262144 (1GiB) 00:11:22.678 Utilization (in LBAs): 262144 (1GiB) 00:11:22.678 Thin Provisioning: Not Supported 00:11:22.678 Per-NS Atomic Units: No 00:11:22.678 Maximum Single Source Range Length: 128 00:11:22.678 Maximum Copy Length: 128 00:11:22.678 Maximum Source Range Count: 128 00:11:22.678 NGUID/EUI64 Never Reused: No 00:11:22.678 Namespace Write Protected: No 00:11:22.678 Endurance group ID: 1 00:11:22.679 Number of LBA Formats: 8 00:11:22.679 Current LBA Format: LBA Format #04 00:11:22.679 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:22.679 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:22.679 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:22.679 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:22.679 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:22.679 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:22.679 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:22.679 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:22.679 00:11:22.679 Get Feature FDP: 00:11:22.679 ================ 00:11:22.679 Enabled: Yes 00:11:22.679 FDP configuration index: 0 00:11:22.679 00:11:22.679 FDP configurations log page 00:11:22.679 =========================== 00:11:22.679 Number of FDP configurations: 1 00:11:22.679 Version: 0 00:11:22.679 Size: 112 00:11:22.679 FDP Configuration Descriptor: 0 00:11:22.679 Descriptor Size: 96 00:11:22.679 Reclaim Group Identifier format: 2 00:11:22.679 FDP Volatile Write Cache: Not Present 00:11:22.679 FDP Configuration: Valid 00:11:22.679 Vendor Specific Size: 0 00:11:22.679 Number of Reclaim Groups: 2 00:11:22.679 Number of Recalim Unit Handles: 8 00:11:22.679 Max Placement Identifiers: 128 00:11:22.679 Number of Namespaces Suppprted: 256 00:11:22.679 Reclaim unit Nominal Size: 6000000 bytes 00:11:22.679 Estimated Reclaim Unit Time Limit: Not Reported 00:11:22.679 RUH Desc #000: RUH Type: Initially Isolated 00:11:22.679 RUH Desc #001: RUH Type: Initially Isolated 00:11:22.679 RUH Desc #002: RUH Type: Initially Isolated 00:11:22.679 RUH Desc #003: RUH Type: Initially Isolated 00:11:22.679 RUH Desc #004: RUH Type: Initially Isolated 00:11:22.679 RUH Desc #005: RUH Type: Initially 
Isolated 00:11:22.679 RUH Desc #006: RUH Type: Initially Isolated 00:11:22.679 RUH Desc #007: RUH Type: Initially Isolated 00:11:22.679 00:11:22.679 FDP reclaim unit handle usage log page 00:11:22.679 ====================================== 00:11:22.679 Number of Reclaim Unit Handles: 8 00:11:22.679 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:22.679 RUH Usage Desc #001: RUH Attributes: Unused 00:11:22.679 RUH Usage Desc #002: RUH Attributes: Unused 00:11:22.679 RUH Usage Desc #003: RUH Attributes: Unused 00:11:22.679 RUH Usage Desc #004: RUH Attributes: Unused 00:11:22.679 RUH Usage Desc #005: RUH Attributes: Unused 00:11:22.679 RUH Usage Desc #006: RUH Attributes: Unused 00:11:22.679 RUH Usage Desc #007: RUH Attributes: Unused 00:11:22.679 00:11:22.679 FDP statistics log page 00:11:22.679 ======================= 00:11:22.679 Host bytes with metadata written: 441032704 00:11:22.679 [2024-07-15 19:33:13.306005] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 70086 terminated unexpected 00:11:22.679 Media bytes with metadata written: 441106432 00:11:22.679 Media bytes erased: 0 00:11:22.679 00:11:22.679 FDP events log page 00:11:22.679 =================== 00:11:22.679 Number of FDP events: 0 00:11:22.679 00:11:22.679 NVM Specific Namespace Data 00:11:22.679 =========================== 00:11:22.679 Logical Block Storage Tag Mask: 0 00:11:22.679 Protection Information Capabilities: 00:11:22.679 16b Guard Protection Information Storage Tag Support: No 00:11:22.679 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:22.679 Storage Tag Check Read Support: No 00:11:22.679 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.679 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.679 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.679 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.679 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.679 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.679 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.679 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.679 ===================================================== 00:11:22.679 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:22.679 ===================================================== 00:11:22.679 Controller Capabilities/Features 00:11:22.679 ================================ 00:11:22.679 Vendor ID: 1b36 00:11:22.679 Subsystem Vendor ID: 1af4 00:11:22.679 Serial Number: 12342 00:11:22.679 Model Number: QEMU NVMe Ctrl 00:11:22.679 Firmware Version: 8.0.0 00:11:22.679 Recommended Arb Burst: 6 00:11:22.679 IEEE OUI Identifier: 00 54 52 00:11:22.679 Multi-path I/O 00:11:22.679 May have multiple subsystem ports: No 00:11:22.679 May have multiple controllers: No 00:11:22.679 Associated with SR-IOV VF: No 00:11:22.679 Max Data Transfer Size: 524288 00:11:22.679 Max Number of Namespaces: 256 00:11:22.679 Max Number of I/O Queues: 64 00:11:22.679 NVMe Specification Version (VS): 1.4 00:11:22.679 NVMe Specification Version (Identify): 1.4 00:11:22.679 Maximum Queue Entries: 2048 00:11:22.679 Contiguous Queues Required: Yes 00:11:22.679 Arbitration
Mechanisms Supported 00:11:22.679 Weighted Round Robin: Not Supported 00:11:22.679 Vendor Specific: Not Supported 00:11:22.679 Reset Timeout: 7500 ms 00:11:22.679 Doorbell Stride: 4 bytes 00:11:22.679 NVM Subsystem Reset: Not Supported 00:11:22.679 Command Sets Supported 00:11:22.679 NVM Command Set: Supported 00:11:22.679 Boot Partition: Not Supported 00:11:22.679 Memory Page Size Minimum: 4096 bytes 00:11:22.679 Memory Page Size Maximum: 65536 bytes 00:11:22.679 Persistent Memory Region: Not Supported 00:11:22.679 Optional Asynchronous Events Supported 00:11:22.679 Namespace Attribute Notices: Supported 00:11:22.679 Firmware Activation Notices: Not Supported 00:11:22.679 ANA Change Notices: Not Supported 00:11:22.679 PLE Aggregate Log Change Notices: Not Supported 00:11:22.679 LBA Status Info Alert Notices: Not Supported 00:11:22.679 EGE Aggregate Log Change Notices: Not Supported 00:11:22.679 Normal NVM Subsystem Shutdown event: Not Supported 00:11:22.679 Zone Descriptor Change Notices: Not Supported 00:11:22.679 Discovery Log Change Notices: Not Supported 00:11:22.679 Controller Attributes 00:11:22.679 128-bit Host Identifier: Not Supported 00:11:22.679 Non-Operational Permissive Mode: Not Supported 00:11:22.679 NVM Sets: Not Supported 00:11:22.679 Read Recovery Levels: Not Supported 00:11:22.679 Endurance Groups: Not Supported 00:11:22.679 Predictable Latency Mode: Not Supported 00:11:22.679 Traffic Based Keep ALive: Not Supported 00:11:22.679 Namespace Granularity: Not Supported 00:11:22.679 SQ Associations: Not Supported 00:11:22.679 UUID List: Not Supported 00:11:22.679 Multi-Domain Subsystem: Not Supported 00:11:22.679 Fixed Capacity Management: Not Supported 00:11:22.679 Variable Capacity Management: Not Supported 00:11:22.679 Delete Endurance Group: Not Supported 00:11:22.679 Delete NVM Set: Not Supported 00:11:22.679 Extended LBA Formats Supported: Supported 00:11:22.679 Flexible Data Placement Supported: Not Supported 00:11:22.679 00:11:22.679 Controller Memory Buffer Support 00:11:22.679 ================================ 00:11:22.679 Supported: No 00:11:22.679 00:11:22.679 Persistent Memory Region Support 00:11:22.679 ================================ 00:11:22.679 Supported: No 00:11:22.679 00:11:22.679 Admin Command Set Attributes 00:11:22.679 ============================ 00:11:22.679 Security Send/Receive: Not Supported 00:11:22.679 Format NVM: Supported 00:11:22.679 Firmware Activate/Download: Not Supported 00:11:22.679 Namespace Management: Supported 00:11:22.679 Device Self-Test: Not Supported 00:11:22.679 Directives: Supported 00:11:22.679 NVMe-MI: Not Supported 00:11:22.679 Virtualization Management: Not Supported 00:11:22.679 Doorbell Buffer Config: Supported 00:11:22.679 Get LBA Status Capability: Not Supported 00:11:22.679 Command & Feature Lockdown Capability: Not Supported 00:11:22.679 Abort Command Limit: 4 00:11:22.679 Async Event Request Limit: 4 00:11:22.679 Number of Firmware Slots: N/A 00:11:22.679 Firmware Slot 1 Read-Only: N/A 00:11:22.679 Firmware Activation Without Reset: N/A 00:11:22.679 Multiple Update Detection Support: N/A 00:11:22.679 Firmware Update Granularity: No Information Provided 00:11:22.679 Per-Namespace SMART Log: Yes 00:11:22.680 Asymmetric Namespace Access Log Page: Not Supported 00:11:22.680 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:22.680 Command Effects Log Page: Supported 00:11:22.680 Get Log Page Extended Data: Supported 00:11:22.680 Telemetry Log Pages: Not Supported 00:11:22.680 Persistent Event Log Pages: Not Supported 
00:11:22.680 Supported Log Pages Log Page: May Support 00:11:22.680 Commands Supported & Effects Log Page: Not Supported 00:11:22.680 Feature Identifiers & Effects Log Page:May Support 00:11:22.680 NVMe-MI Commands & Effects Log Page: May Support 00:11:22.680 Data Area 4 for Telemetry Log: Not Supported 00:11:22.680 Error Log Page Entries Supported: 1 00:11:22.680 Keep Alive: Not Supported 00:11:22.680 00:11:22.680 NVM Command Set Attributes 00:11:22.680 ========================== 00:11:22.680 Submission Queue Entry Size 00:11:22.680 Max: 64 00:11:22.680 Min: 64 00:11:22.680 Completion Queue Entry Size 00:11:22.680 Max: 16 00:11:22.680 Min: 16 00:11:22.680 Number of Namespaces: 256 00:11:22.680 Compare Command: Supported 00:11:22.680 Write Uncorrectable Command: Not Supported 00:11:22.680 Dataset Management Command: Supported 00:11:22.680 Write Zeroes Command: Supported 00:11:22.680 Set Features Save Field: Supported 00:11:22.680 Reservations: Not Supported 00:11:22.680 Timestamp: Supported 00:11:22.680 Copy: Supported 00:11:22.680 Volatile Write Cache: Present 00:11:22.680 Atomic Write Unit (Normal): 1 00:11:22.680 Atomic Write Unit (PFail): 1 00:11:22.680 Atomic Compare & Write Unit: 1 00:11:22.680 Fused Compare & Write: Not Supported 00:11:22.680 Scatter-Gather List 00:11:22.680 SGL Command Set: Supported 00:11:22.680 SGL Keyed: Not Supported 00:11:22.680 SGL Bit Bucket Descriptor: Not Supported 00:11:22.680 SGL Metadata Pointer: Not Supported 00:11:22.680 Oversized SGL: Not Supported 00:11:22.680 SGL Metadata Address: Not Supported 00:11:22.680 SGL Offset: Not Supported 00:11:22.680 Transport SGL Data Block: Not Supported 00:11:22.680 Replay Protected Memory Block: Not Supported 00:11:22.680 00:11:22.680 Firmware Slot Information 00:11:22.680 ========================= 00:11:22.680 Active slot: 1 00:11:22.680 Slot 1 Firmware Revision: 1.0 00:11:22.680 00:11:22.680 00:11:22.680 Commands Supported and Effects 00:11:22.680 ============================== 00:11:22.680 Admin Commands 00:11:22.680 -------------- 00:11:22.680 Delete I/O Submission Queue (00h): Supported 00:11:22.680 Create I/O Submission Queue (01h): Supported 00:11:22.680 Get Log Page (02h): Supported 00:11:22.680 Delete I/O Completion Queue (04h): Supported 00:11:22.680 Create I/O Completion Queue (05h): Supported 00:11:22.680 Identify (06h): Supported 00:11:22.680 Abort (08h): Supported 00:11:22.680 Set Features (09h): Supported 00:11:22.680 Get Features (0Ah): Supported 00:11:22.680 Asynchronous Event Request (0Ch): Supported 00:11:22.680 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:22.680 Directive Send (19h): Supported 00:11:22.680 Directive Receive (1Ah): Supported 00:11:22.680 Virtualization Management (1Ch): Supported 00:11:22.680 Doorbell Buffer Config (7Ch): Supported 00:11:22.680 Format NVM (80h): Supported LBA-Change 00:11:22.680 I/O Commands 00:11:22.680 ------------ 00:11:22.680 Flush (00h): Supported LBA-Change 00:11:22.680 Write (01h): Supported LBA-Change 00:11:22.680 Read (02h): Supported 00:11:22.680 Compare (05h): Supported 00:11:22.680 Write Zeroes (08h): Supported LBA-Change 00:11:22.680 Dataset Management (09h): Supported LBA-Change 00:11:22.680 Unknown (0Ch): Supported 00:11:22.680 Unknown (12h): Supported 00:11:22.680 Copy (19h): Supported LBA-Change 00:11:22.680 Unknown (1Dh): Supported LBA-Change 00:11:22.680 00:11:22.680 Error Log 00:11:22.680 ========= 00:11:22.680 00:11:22.680 Arbitration 00:11:22.680 =========== 00:11:22.680 Arbitration Burst: no limit 00:11:22.680 00:11:22.680 
Power Management 00:11:22.680 ================ 00:11:22.680 Number of Power States: 1 00:11:22.680 Current Power State: Power State #0 00:11:22.680 Power State #0: 00:11:22.680 Max Power: 25.00 W 00:11:22.680 Non-Operational State: Operational 00:11:22.680 Entry Latency: 16 microseconds 00:11:22.680 Exit Latency: 4 microseconds 00:11:22.680 Relative Read Throughput: 0 00:11:22.680 Relative Read Latency: 0 00:11:22.680 Relative Write Throughput: 0 00:11:22.680 Relative Write Latency: 0 00:11:22.680 Idle Power: Not Reported 00:11:22.680 Active Power: Not Reported 00:11:22.680 Non-Operational Permissive Mode: Not Supported 00:11:22.680 00:11:22.680 Health Information 00:11:22.680 ================== 00:11:22.680 Critical Warnings: 00:11:22.680 Available Spare Space: OK 00:11:22.680 Temperature: OK 00:11:22.680 Device Reliability: OK 00:11:22.680 Read Only: No 00:11:22.680 Volatile Memory Backup: OK 00:11:22.680 Current Temperature: 323 Kelvin (50 Celsius) 00:11:22.680 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:22.680 Available Spare: 0% 00:11:22.680 Available Spare Threshold: 0% 00:11:22.680 Life Percentage Used: 0% 00:11:22.680 Data Units Read: 2282 00:11:22.680 Data Units Written: 1962 00:11:22.680 Host Read Commands: 104809 00:11:22.680 Host Write Commands: 100579 00:11:22.680 Controller Busy Time: 0 minutes 00:11:22.680 Power Cycles: 0 00:11:22.680 Power On Hours: 0 hours 00:11:22.680 Unsafe Shutdowns: 0 00:11:22.680 Unrecoverable Media Errors: 0 00:11:22.680 Lifetime Error Log Entries: 0 00:11:22.680 Warning Temperature Time: 0 minutes 00:11:22.680 Critical Temperature Time: 0 minutes 00:11:22.680 00:11:22.680 Number of Queues 00:11:22.680 ================ 00:11:22.680 Number of I/O Submission Queues: 64 00:11:22.680 Number of I/O Completion Queues: 64 00:11:22.680 00:11:22.680 ZNS Specific Controller Data 00:11:22.680 ============================ 00:11:22.680 Zone Append Size Limit: 0 00:11:22.680 00:11:22.680 00:11:22.680 Active Namespaces 00:11:22.680 ================= 00:11:22.680 Namespace ID:1 00:11:22.680 Error Recovery Timeout: Unlimited 00:11:22.680 Command Set Identifier: NVM (00h) 00:11:22.680 Deallocate: Supported 00:11:22.680 Deallocated/Unwritten Error: Supported 00:11:22.680 Deallocated Read Value: All 0x00 00:11:22.680 Deallocate in Write Zeroes: Not Supported 00:11:22.680 Deallocated Guard Field: 0xFFFF 00:11:22.680 Flush: Supported 00:11:22.680 Reservation: Not Supported 00:11:22.680 Namespace Sharing Capabilities: Private 00:11:22.680 Size (in LBAs): 1048576 (4GiB) 00:11:22.680 Capacity (in LBAs): 1048576 (4GiB) 00:11:22.680 Utilization (in LBAs): 1048576 (4GiB) 00:11:22.680 Thin Provisioning: Not Supported 00:11:22.680 Per-NS Atomic Units: No 00:11:22.680 Maximum Single Source Range Length: 128 00:11:22.680 Maximum Copy Length: 128 00:11:22.680 Maximum Source Range Count: 128 00:11:22.680 NGUID/EUI64 Never Reused: No 00:11:22.680 Namespace Write Protected: No 00:11:22.680 Number of LBA Formats: 8 00:11:22.680 Current LBA Format: LBA Format #04 00:11:22.680 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:22.680 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:22.680 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:22.680 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:22.680 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:22.680 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:22.680 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:22.680 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:22.680 00:11:22.680 NVM 
Specific Namespace Data 00:11:22.680 =========================== 00:11:22.680 Logical Block Storage Tag Mask: 0 00:11:22.680 Protection Information Capabilities: 00:11:22.680 16b Guard Protection Information Storage Tag Support: No 00:11:22.680 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:22.680 Storage Tag Check Read Support: No 00:11:22.680 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.680 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.680 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.680 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.680 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.680 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.680 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.680 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.680 Namespace ID:2 00:11:22.680 Error Recovery Timeout: Unlimited 00:11:22.680 Command Set Identifier: NVM (00h) 00:11:22.680 Deallocate: Supported 00:11:22.680 Deallocated/Unwritten Error: Supported 00:11:22.680 Deallocated Read Value: All 0x00 00:11:22.680 Deallocate in Write Zeroes: Not Supported 00:11:22.680 Deallocated Guard Field: 0xFFFF 00:11:22.680 Flush: Supported 00:11:22.680 Reservation: Not Supported 00:11:22.680 Namespace Sharing Capabilities: Private 00:11:22.680 Size (in LBAs): 1048576 (4GiB) 00:11:22.680 Capacity (in LBAs): 1048576 (4GiB) 00:11:22.680 Utilization (in LBAs): 1048576 (4GiB) 00:11:22.680 Thin Provisioning: Not Supported 00:11:22.680 Per-NS Atomic Units: No 00:11:22.680 Maximum Single Source Range Length: 128 00:11:22.680 Maximum Copy Length: 128 00:11:22.680 Maximum Source Range Count: 128 00:11:22.680 NGUID/EUI64 Never Reused: No 00:11:22.680 Namespace Write Protected: No 00:11:22.680 Number of LBA Formats: 8 00:11:22.680 Current LBA Format: LBA Format #04 00:11:22.680 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:22.680 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:22.680 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:22.680 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:22.680 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:22.680 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:22.680 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:22.680 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:22.680 00:11:22.680 NVM Specific Namespace Data 00:11:22.680 =========================== 00:11:22.680 Logical Block Storage Tag Mask: 0 00:11:22.680 Protection Information Capabilities: 00:11:22.680 16b Guard Protection Information Storage Tag Support: No 00:11:22.680 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:22.680 Storage Tag Check Read Support: No 00:11:22.680 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.680 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.680 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.680 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.680 Extended LBA 
Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.680 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.680 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.680 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.680 Namespace ID:3 00:11:22.680 Error Recovery Timeout: Unlimited 00:11:22.680 Command Set Identifier: NVM (00h) 00:11:22.681 Deallocate: Supported 00:11:22.681 Deallocated/Unwritten Error: Supported 00:11:22.681 Deallocated Read Value: All 0x00 00:11:22.681 Deallocate in Write Zeroes: Not Supported 00:11:22.681 Deallocated Guard Field: 0xFFFF 00:11:22.681 Flush: Supported 00:11:22.681 Reservation: Not Supported 00:11:22.681 Namespace Sharing Capabilities: Private 00:11:22.681 Size (in LBAs): 1048576 (4GiB) 00:11:22.681 Capacity (in LBAs): 1048576 (4GiB) 00:11:22.681 Utilization (in LBAs): 1048576 (4GiB) 00:11:22.681 Thin Provisioning: Not Supported 00:11:22.681 Per-NS Atomic Units: No 00:11:22.681 Maximum Single Source Range Length: 128 00:11:22.681 Maximum Copy Length: 128 00:11:22.681 Maximum Source Range Count: 128 00:11:22.681 NGUID/EUI64 Never Reused: No 00:11:22.681 Namespace Write Protected: No 00:11:22.681 Number of LBA Formats: 8 00:11:22.681 Current LBA Format: LBA Format #04 00:11:22.681 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:22.681 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:22.681 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:22.681 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:22.681 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:22.681 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:22.681 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:22.681 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:22.681 00:11:22.681 NVM Specific Namespace Data 00:11:22.681 =========================== 00:11:22.681 Logical Block Storage Tag Mask: 0 00:11:22.681 Protection Information Capabilities: 00:11:22.681 16b Guard Protection Information Storage Tag Support: No 00:11:22.681 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:22.681 Storage Tag Check Read Support: No 00:11:22.681 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.681 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.681 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.681 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.681 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.681 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.681 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.681 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.681 19:33:13 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:22.681 19:33:13 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:11:22.947 ===================================================== 00:11:22.947 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:22.947 
===================================================== 00:11:22.947 Controller Capabilities/Features 00:11:22.947 ================================ 00:11:22.947 Vendor ID: 1b36 00:11:22.947 Subsystem Vendor ID: 1af4 00:11:22.947 Serial Number: 12340 00:11:22.947 Model Number: QEMU NVMe Ctrl 00:11:22.947 Firmware Version: 8.0.0 00:11:22.947 Recommended Arb Burst: 6 00:11:22.947 IEEE OUI Identifier: 00 54 52 00:11:22.947 Multi-path I/O 00:11:22.947 May have multiple subsystem ports: No 00:11:22.947 May have multiple controllers: No 00:11:22.947 Associated with SR-IOV VF: No 00:11:22.947 Max Data Transfer Size: 524288 00:11:22.947 Max Number of Namespaces: 256 00:11:22.947 Max Number of I/O Queues: 64 00:11:22.947 NVMe Specification Version (VS): 1.4 00:11:22.947 NVMe Specification Version (Identify): 1.4 00:11:22.947 Maximum Queue Entries: 2048 00:11:22.947 Contiguous Queues Required: Yes 00:11:22.947 Arbitration Mechanisms Supported 00:11:22.947 Weighted Round Robin: Not Supported 00:11:22.947 Vendor Specific: Not Supported 00:11:22.947 Reset Timeout: 7500 ms 00:11:22.947 Doorbell Stride: 4 bytes 00:11:22.947 NVM Subsystem Reset: Not Supported 00:11:22.947 Command Sets Supported 00:11:22.947 NVM Command Set: Supported 00:11:22.947 Boot Partition: Not Supported 00:11:22.947 Memory Page Size Minimum: 4096 bytes 00:11:22.947 Memory Page Size Maximum: 65536 bytes 00:11:22.947 Persistent Memory Region: Not Supported 00:11:22.947 Optional Asynchronous Events Supported 00:11:22.947 Namespace Attribute Notices: Supported 00:11:22.947 Firmware Activation Notices: Not Supported 00:11:22.947 ANA Change Notices: Not Supported 00:11:22.947 PLE Aggregate Log Change Notices: Not Supported 00:11:22.947 LBA Status Info Alert Notices: Not Supported 00:11:22.947 EGE Aggregate Log Change Notices: Not Supported 00:11:22.947 Normal NVM Subsystem Shutdown event: Not Supported 00:11:22.947 Zone Descriptor Change Notices: Not Supported 00:11:22.947 Discovery Log Change Notices: Not Supported 00:11:22.947 Controller Attributes 00:11:22.947 128-bit Host Identifier: Not Supported 00:11:22.947 Non-Operational Permissive Mode: Not Supported 00:11:22.947 NVM Sets: Not Supported 00:11:22.947 Read Recovery Levels: Not Supported 00:11:22.947 Endurance Groups: Not Supported 00:11:22.947 Predictable Latency Mode: Not Supported 00:11:22.947 Traffic Based Keep ALive: Not Supported 00:11:22.947 Namespace Granularity: Not Supported 00:11:22.947 SQ Associations: Not Supported 00:11:22.947 UUID List: Not Supported 00:11:22.947 Multi-Domain Subsystem: Not Supported 00:11:22.947 Fixed Capacity Management: Not Supported 00:11:22.947 Variable Capacity Management: Not Supported 00:11:22.947 Delete Endurance Group: Not Supported 00:11:22.947 Delete NVM Set: Not Supported 00:11:22.947 Extended LBA Formats Supported: Supported 00:11:22.947 Flexible Data Placement Supported: Not Supported 00:11:22.947 00:11:22.947 Controller Memory Buffer Support 00:11:22.947 ================================ 00:11:22.947 Supported: No 00:11:22.947 00:11:22.947 Persistent Memory Region Support 00:11:22.947 ================================ 00:11:22.947 Supported: No 00:11:22.947 00:11:22.947 Admin Command Set Attributes 00:11:22.947 ============================ 00:11:22.947 Security Send/Receive: Not Supported 00:11:22.947 Format NVM: Supported 00:11:22.947 Firmware Activate/Download: Not Supported 00:11:22.947 Namespace Management: Supported 00:11:22.947 Device Self-Test: Not Supported 00:11:22.947 Directives: Supported 00:11:22.947 NVMe-MI: Not Supported 
00:11:22.947 Virtualization Management: Not Supported 00:11:22.947 Doorbell Buffer Config: Supported 00:11:22.947 Get LBA Status Capability: Not Supported 00:11:22.947 Command & Feature Lockdown Capability: Not Supported 00:11:22.947 Abort Command Limit: 4 00:11:22.947 Async Event Request Limit: 4 00:11:22.947 Number of Firmware Slots: N/A 00:11:22.947 Firmware Slot 1 Read-Only: N/A 00:11:22.947 Firmware Activation Without Reset: N/A 00:11:22.947 Multiple Update Detection Support: N/A 00:11:22.947 Firmware Update Granularity: No Information Provided 00:11:22.947 Per-Namespace SMART Log: Yes 00:11:22.947 Asymmetric Namespace Access Log Page: Not Supported 00:11:22.947 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:22.947 Command Effects Log Page: Supported 00:11:22.947 Get Log Page Extended Data: Supported 00:11:22.947 Telemetry Log Pages: Not Supported 00:11:22.947 Persistent Event Log Pages: Not Supported 00:11:22.947 Supported Log Pages Log Page: May Support 00:11:22.947 Commands Supported & Effects Log Page: Not Supported 00:11:22.947 Feature Identifiers & Effects Log Page:May Support 00:11:22.947 NVMe-MI Commands & Effects Log Page: May Support 00:11:22.947 Data Area 4 for Telemetry Log: Not Supported 00:11:22.947 Error Log Page Entries Supported: 1 00:11:22.947 Keep Alive: Not Supported 00:11:22.947 00:11:22.947 NVM Command Set Attributes 00:11:22.947 ========================== 00:11:22.947 Submission Queue Entry Size 00:11:22.947 Max: 64 00:11:22.947 Min: 64 00:11:22.947 Completion Queue Entry Size 00:11:22.947 Max: 16 00:11:22.947 Min: 16 00:11:22.947 Number of Namespaces: 256 00:11:22.947 Compare Command: Supported 00:11:22.947 Write Uncorrectable Command: Not Supported 00:11:22.947 Dataset Management Command: Supported 00:11:22.947 Write Zeroes Command: Supported 00:11:22.947 Set Features Save Field: Supported 00:11:22.947 Reservations: Not Supported 00:11:22.947 Timestamp: Supported 00:11:22.947 Copy: Supported 00:11:22.947 Volatile Write Cache: Present 00:11:22.947 Atomic Write Unit (Normal): 1 00:11:22.947 Atomic Write Unit (PFail): 1 00:11:22.947 Atomic Compare & Write Unit: 1 00:11:22.947 Fused Compare & Write: Not Supported 00:11:22.947 Scatter-Gather List 00:11:22.947 SGL Command Set: Supported 00:11:22.947 SGL Keyed: Not Supported 00:11:22.947 SGL Bit Bucket Descriptor: Not Supported 00:11:22.947 SGL Metadata Pointer: Not Supported 00:11:22.947 Oversized SGL: Not Supported 00:11:22.947 SGL Metadata Address: Not Supported 00:11:22.947 SGL Offset: Not Supported 00:11:22.947 Transport SGL Data Block: Not Supported 00:11:22.947 Replay Protected Memory Block: Not Supported 00:11:22.947 00:11:22.947 Firmware Slot Information 00:11:22.947 ========================= 00:11:22.947 Active slot: 1 00:11:22.947 Slot 1 Firmware Revision: 1.0 00:11:22.947 00:11:22.947 00:11:22.947 Commands Supported and Effects 00:11:22.947 ============================== 00:11:22.947 Admin Commands 00:11:22.947 -------------- 00:11:22.947 Delete I/O Submission Queue (00h): Supported 00:11:22.947 Create I/O Submission Queue (01h): Supported 00:11:22.947 Get Log Page (02h): Supported 00:11:22.947 Delete I/O Completion Queue (04h): Supported 00:11:22.947 Create I/O Completion Queue (05h): Supported 00:11:22.947 Identify (06h): Supported 00:11:22.947 Abort (08h): Supported 00:11:22.947 Set Features (09h): Supported 00:11:22.947 Get Features (0Ah): Supported 00:11:22.947 Asynchronous Event Request (0Ch): Supported 00:11:22.947 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:22.947 Directive 
Send (19h): Supported 00:11:22.947 Directive Receive (1Ah): Supported 00:11:22.947 Virtualization Management (1Ch): Supported 00:11:22.947 Doorbell Buffer Config (7Ch): Supported 00:11:22.947 Format NVM (80h): Supported LBA-Change 00:11:22.947 I/O Commands 00:11:22.947 ------------ 00:11:22.947 Flush (00h): Supported LBA-Change 00:11:22.947 Write (01h): Supported LBA-Change 00:11:22.947 Read (02h): Supported 00:11:22.947 Compare (05h): Supported 00:11:22.947 Write Zeroes (08h): Supported LBA-Change 00:11:22.947 Dataset Management (09h): Supported LBA-Change 00:11:22.947 Unknown (0Ch): Supported 00:11:22.947 Unknown (12h): Supported 00:11:22.948 Copy (19h): Supported LBA-Change 00:11:22.948 Unknown (1Dh): Supported LBA-Change 00:11:22.948 00:11:22.948 Error Log 00:11:22.948 ========= 00:11:22.948 00:11:22.948 Arbitration 00:11:22.948 =========== 00:11:22.948 Arbitration Burst: no limit 00:11:22.948 00:11:22.948 Power Management 00:11:22.948 ================ 00:11:22.948 Number of Power States: 1 00:11:22.948 Current Power State: Power State #0 00:11:22.948 Power State #0: 00:11:22.948 Max Power: 25.00 W 00:11:22.948 Non-Operational State: Operational 00:11:22.948 Entry Latency: 16 microseconds 00:11:22.948 Exit Latency: 4 microseconds 00:11:22.948 Relative Read Throughput: 0 00:11:22.948 Relative Read Latency: 0 00:11:22.948 Relative Write Throughput: 0 00:11:22.948 Relative Write Latency: 0 00:11:22.948 Idle Power: Not Reported 00:11:22.948 Active Power: Not Reported 00:11:22.948 Non-Operational Permissive Mode: Not Supported 00:11:22.948 00:11:22.948 Health Information 00:11:22.948 ================== 00:11:22.948 Critical Warnings: 00:11:22.948 Available Spare Space: OK 00:11:22.948 Temperature: OK 00:11:22.948 Device Reliability: OK 00:11:22.948 Read Only: No 00:11:22.948 Volatile Memory Backup: OK 00:11:22.948 Current Temperature: 323 Kelvin (50 Celsius) 00:11:22.948 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:22.948 Available Spare: 0% 00:11:22.948 Available Spare Threshold: 0% 00:11:22.948 Life Percentage Used: 0% 00:11:22.948 Data Units Read: 1116 00:11:22.948 Data Units Written: 949 00:11:22.948 Host Read Commands: 49730 00:11:22.948 Host Write Commands: 48212 00:11:22.948 Controller Busy Time: 0 minutes 00:11:22.948 Power Cycles: 0 00:11:22.948 Power On Hours: 0 hours 00:11:22.948 Unsafe Shutdowns: 0 00:11:22.948 Unrecoverable Media Errors: 0 00:11:22.948 Lifetime Error Log Entries: 0 00:11:22.948 Warning Temperature Time: 0 minutes 00:11:22.948 Critical Temperature Time: 0 minutes 00:11:22.948 00:11:22.948 Number of Queues 00:11:22.948 ================ 00:11:22.948 Number of I/O Submission Queues: 64 00:11:22.948 Number of I/O Completion Queues: 64 00:11:22.948 00:11:22.948 ZNS Specific Controller Data 00:11:22.948 ============================ 00:11:22.948 Zone Append Size Limit: 0 00:11:22.948 00:11:22.948 00:11:22.948 Active Namespaces 00:11:22.948 ================= 00:11:22.948 Namespace ID:1 00:11:22.948 Error Recovery Timeout: Unlimited 00:11:22.948 Command Set Identifier: NVM (00h) 00:11:22.948 Deallocate: Supported 00:11:22.948 Deallocated/Unwritten Error: Supported 00:11:22.948 Deallocated Read Value: All 0x00 00:11:22.948 Deallocate in Write Zeroes: Not Supported 00:11:22.948 Deallocated Guard Field: 0xFFFF 00:11:22.948 Flush: Supported 00:11:22.948 Reservation: Not Supported 00:11:22.948 Metadata Transferred as: Separate Metadata Buffer 00:11:22.948 Namespace Sharing Capabilities: Private 00:11:22.948 Size (in LBAs): 1548666 (5GiB) 00:11:22.948 Capacity (in 
LBAs): 1548666 (5GiB) 00:11:22.948 Utilization (in LBAs): 1548666 (5GiB) 00:11:22.948 Thin Provisioning: Not Supported 00:11:22.948 Per-NS Atomic Units: No 00:11:22.948 Maximum Single Source Range Length: 128 00:11:22.948 Maximum Copy Length: 128 00:11:22.948 Maximum Source Range Count: 128 00:11:22.948 NGUID/EUI64 Never Reused: No 00:11:22.948 Namespace Write Protected: No 00:11:22.948 Number of LBA Formats: 8 00:11:22.948 Current LBA Format: LBA Format #07 00:11:22.948 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:22.948 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:22.948 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:22.948 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:22.948 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:22.948 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:22.948 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:22.948 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:22.948 00:11:22.948 NVM Specific Namespace Data 00:11:22.948 =========================== 00:11:22.948 Logical Block Storage Tag Mask: 0 00:11:22.948 Protection Information Capabilities: 00:11:22.948 16b Guard Protection Information Storage Tag Support: No 00:11:22.948 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:22.948 Storage Tag Check Read Support: No 00:11:22.948 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.948 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.948 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.948 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.948 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.948 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.948 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.948 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:22.948 19:33:13 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:22.948 19:33:13 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:11:23.533 ===================================================== 00:11:23.533 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:23.533 ===================================================== 00:11:23.533 Controller Capabilities/Features 00:11:23.534 ================================ 00:11:23.534 Vendor ID: 1b36 00:11:23.534 Subsystem Vendor ID: 1af4 00:11:23.534 Serial Number: 12341 00:11:23.534 Model Number: QEMU NVMe Ctrl 00:11:23.534 Firmware Version: 8.0.0 00:11:23.534 Recommended Arb Burst: 6 00:11:23.534 IEEE OUI Identifier: 00 54 52 00:11:23.534 Multi-path I/O 00:11:23.534 May have multiple subsystem ports: No 00:11:23.534 May have multiple controllers: No 00:11:23.534 Associated with SR-IOV VF: No 00:11:23.534 Max Data Transfer Size: 524288 00:11:23.534 Max Number of Namespaces: 256 00:11:23.534 Max Number of I/O Queues: 64 00:11:23.534 NVMe Specification Version (VS): 1.4 00:11:23.534 NVMe Specification Version (Identify): 1.4 00:11:23.534 Maximum Queue Entries: 2048 00:11:23.534 Contiguous Queues Required: Yes 00:11:23.534 Arbitration Mechanisms Supported 00:11:23.534 Weighted Round 
Robin: Not Supported 00:11:23.534 Vendor Specific: Not Supported 00:11:23.534 Reset Timeout: 7500 ms 00:11:23.534 Doorbell Stride: 4 bytes 00:11:23.534 NVM Subsystem Reset: Not Supported 00:11:23.534 Command Sets Supported 00:11:23.534 NVM Command Set: Supported 00:11:23.534 Boot Partition: Not Supported 00:11:23.534 Memory Page Size Minimum: 4096 bytes 00:11:23.534 Memory Page Size Maximum: 65536 bytes 00:11:23.534 Persistent Memory Region: Not Supported 00:11:23.534 Optional Asynchronous Events Supported 00:11:23.534 Namespace Attribute Notices: Supported 00:11:23.534 Firmware Activation Notices: Not Supported 00:11:23.534 ANA Change Notices: Not Supported 00:11:23.534 PLE Aggregate Log Change Notices: Not Supported 00:11:23.534 LBA Status Info Alert Notices: Not Supported 00:11:23.534 EGE Aggregate Log Change Notices: Not Supported 00:11:23.534 Normal NVM Subsystem Shutdown event: Not Supported 00:11:23.534 Zone Descriptor Change Notices: Not Supported 00:11:23.534 Discovery Log Change Notices: Not Supported 00:11:23.534 Controller Attributes 00:11:23.534 128-bit Host Identifier: Not Supported 00:11:23.534 Non-Operational Permissive Mode: Not Supported 00:11:23.534 NVM Sets: Not Supported 00:11:23.534 Read Recovery Levels: Not Supported 00:11:23.534 Endurance Groups: Not Supported 00:11:23.534 Predictable Latency Mode: Not Supported 00:11:23.534 Traffic Based Keep ALive: Not Supported 00:11:23.534 Namespace Granularity: Not Supported 00:11:23.534 SQ Associations: Not Supported 00:11:23.534 UUID List: Not Supported 00:11:23.534 Multi-Domain Subsystem: Not Supported 00:11:23.534 Fixed Capacity Management: Not Supported 00:11:23.534 Variable Capacity Management: Not Supported 00:11:23.534 Delete Endurance Group: Not Supported 00:11:23.534 Delete NVM Set: Not Supported 00:11:23.534 Extended LBA Formats Supported: Supported 00:11:23.534 Flexible Data Placement Supported: Not Supported 00:11:23.534 00:11:23.534 Controller Memory Buffer Support 00:11:23.534 ================================ 00:11:23.534 Supported: No 00:11:23.534 00:11:23.534 Persistent Memory Region Support 00:11:23.534 ================================ 00:11:23.534 Supported: No 00:11:23.534 00:11:23.534 Admin Command Set Attributes 00:11:23.534 ============================ 00:11:23.534 Security Send/Receive: Not Supported 00:11:23.534 Format NVM: Supported 00:11:23.534 Firmware Activate/Download: Not Supported 00:11:23.534 Namespace Management: Supported 00:11:23.534 Device Self-Test: Not Supported 00:11:23.534 Directives: Supported 00:11:23.534 NVMe-MI: Not Supported 00:11:23.534 Virtualization Management: Not Supported 00:11:23.534 Doorbell Buffer Config: Supported 00:11:23.534 Get LBA Status Capability: Not Supported 00:11:23.534 Command & Feature Lockdown Capability: Not Supported 00:11:23.534 Abort Command Limit: 4 00:11:23.534 Async Event Request Limit: 4 00:11:23.534 Number of Firmware Slots: N/A 00:11:23.534 Firmware Slot 1 Read-Only: N/A 00:11:23.534 Firmware Activation Without Reset: N/A 00:11:23.534 Multiple Update Detection Support: N/A 00:11:23.534 Firmware Update Granularity: No Information Provided 00:11:23.534 Per-Namespace SMART Log: Yes 00:11:23.534 Asymmetric Namespace Access Log Page: Not Supported 00:11:23.534 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:23.534 Command Effects Log Page: Supported 00:11:23.534 Get Log Page Extended Data: Supported 00:11:23.534 Telemetry Log Pages: Not Supported 00:11:23.534 Persistent Event Log Pages: Not Supported 00:11:23.534 Supported Log Pages Log Page: May Support 
00:11:23.534 Commands Supported & Effects Log Page: Not Supported 00:11:23.534 Feature Identifiers & Effects Log Page:May Support 00:11:23.534 NVMe-MI Commands & Effects Log Page: May Support 00:11:23.534 Data Area 4 for Telemetry Log: Not Supported 00:11:23.534 Error Log Page Entries Supported: 1 00:11:23.534 Keep Alive: Not Supported 00:11:23.534 00:11:23.534 NVM Command Set Attributes 00:11:23.534 ========================== 00:11:23.534 Submission Queue Entry Size 00:11:23.534 Max: 64 00:11:23.534 Min: 64 00:11:23.534 Completion Queue Entry Size 00:11:23.534 Max: 16 00:11:23.534 Min: 16 00:11:23.534 Number of Namespaces: 256 00:11:23.534 Compare Command: Supported 00:11:23.534 Write Uncorrectable Command: Not Supported 00:11:23.534 Dataset Management Command: Supported 00:11:23.534 Write Zeroes Command: Supported 00:11:23.534 Set Features Save Field: Supported 00:11:23.534 Reservations: Not Supported 00:11:23.534 Timestamp: Supported 00:11:23.534 Copy: Supported 00:11:23.534 Volatile Write Cache: Present 00:11:23.534 Atomic Write Unit (Normal): 1 00:11:23.534 Atomic Write Unit (PFail): 1 00:11:23.534 Atomic Compare & Write Unit: 1 00:11:23.534 Fused Compare & Write: Not Supported 00:11:23.534 Scatter-Gather List 00:11:23.534 SGL Command Set: Supported 00:11:23.534 SGL Keyed: Not Supported 00:11:23.534 SGL Bit Bucket Descriptor: Not Supported 00:11:23.534 SGL Metadata Pointer: Not Supported 00:11:23.534 Oversized SGL: Not Supported 00:11:23.534 SGL Metadata Address: Not Supported 00:11:23.534 SGL Offset: Not Supported 00:11:23.534 Transport SGL Data Block: Not Supported 00:11:23.534 Replay Protected Memory Block: Not Supported 00:11:23.534 00:11:23.534 Firmware Slot Information 00:11:23.534 ========================= 00:11:23.534 Active slot: 1 00:11:23.534 Slot 1 Firmware Revision: 1.0 00:11:23.534 00:11:23.534 00:11:23.534 Commands Supported and Effects 00:11:23.534 ============================== 00:11:23.534 Admin Commands 00:11:23.534 -------------- 00:11:23.534 Delete I/O Submission Queue (00h): Supported 00:11:23.534 Create I/O Submission Queue (01h): Supported 00:11:23.534 Get Log Page (02h): Supported 00:11:23.534 Delete I/O Completion Queue (04h): Supported 00:11:23.534 Create I/O Completion Queue (05h): Supported 00:11:23.534 Identify (06h): Supported 00:11:23.534 Abort (08h): Supported 00:11:23.534 Set Features (09h): Supported 00:11:23.534 Get Features (0Ah): Supported 00:11:23.534 Asynchronous Event Request (0Ch): Supported 00:11:23.534 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:23.534 Directive Send (19h): Supported 00:11:23.534 Directive Receive (1Ah): Supported 00:11:23.534 Virtualization Management (1Ch): Supported 00:11:23.534 Doorbell Buffer Config (7Ch): Supported 00:11:23.534 Format NVM (80h): Supported LBA-Change 00:11:23.534 I/O Commands 00:11:23.534 ------------ 00:11:23.534 Flush (00h): Supported LBA-Change 00:11:23.534 Write (01h): Supported LBA-Change 00:11:23.534 Read (02h): Supported 00:11:23.534 Compare (05h): Supported 00:11:23.534 Write Zeroes (08h): Supported LBA-Change 00:11:23.534 Dataset Management (09h): Supported LBA-Change 00:11:23.534 Unknown (0Ch): Supported 00:11:23.534 Unknown (12h): Supported 00:11:23.534 Copy (19h): Supported LBA-Change 00:11:23.534 Unknown (1Dh): Supported LBA-Change 00:11:23.534 00:11:23.534 Error Log 00:11:23.534 ========= 00:11:23.534 00:11:23.534 Arbitration 00:11:23.534 =========== 00:11:23.534 Arbitration Burst: no limit 00:11:23.534 00:11:23.534 Power Management 00:11:23.534 ================ 
00:11:23.534 Number of Power States: 1 00:11:23.534 Current Power State: Power State #0 00:11:23.534 Power State #0: 00:11:23.534 Max Power: 25.00 W 00:11:23.534 Non-Operational State: Operational 00:11:23.534 Entry Latency: 16 microseconds 00:11:23.534 Exit Latency: 4 microseconds 00:11:23.534 Relative Read Throughput: 0 00:11:23.534 Relative Read Latency: 0 00:11:23.534 Relative Write Throughput: 0 00:11:23.534 Relative Write Latency: 0 00:11:23.534 Idle Power: Not Reported 00:11:23.534 Active Power: Not Reported 00:11:23.534 Non-Operational Permissive Mode: Not Supported 00:11:23.534 00:11:23.534 Health Information 00:11:23.534 ================== 00:11:23.534 Critical Warnings: 00:11:23.534 Available Spare Space: OK 00:11:23.534 Temperature: OK 00:11:23.534 Device Reliability: OK 00:11:23.534 Read Only: No 00:11:23.534 Volatile Memory Backup: OK 00:11:23.534 Current Temperature: 323 Kelvin (50 Celsius) 00:11:23.535 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:23.535 Available Spare: 0% 00:11:23.535 Available Spare Threshold: 0% 00:11:23.535 Life Percentage Used: 0% 00:11:23.535 Data Units Read: 810 00:11:23.535 Data Units Written: 661 00:11:23.535 Host Read Commands: 35727 00:11:23.535 Host Write Commands: 33474 00:11:23.535 Controller Busy Time: 0 minutes 00:11:23.535 Power Cycles: 0 00:11:23.535 Power On Hours: 0 hours 00:11:23.535 Unsafe Shutdowns: 0 00:11:23.535 Unrecoverable Media Errors: 0 00:11:23.535 Lifetime Error Log Entries: 0 00:11:23.535 Warning Temperature Time: 0 minutes 00:11:23.535 Critical Temperature Time: 0 minutes 00:11:23.535 00:11:23.535 Number of Queues 00:11:23.535 ================ 00:11:23.535 Number of I/O Submission Queues: 64 00:11:23.535 Number of I/O Completion Queues: 64 00:11:23.535 00:11:23.535 ZNS Specific Controller Data 00:11:23.535 ============================ 00:11:23.535 Zone Append Size Limit: 0 00:11:23.535 00:11:23.535 00:11:23.535 Active Namespaces 00:11:23.535 ================= 00:11:23.535 Namespace ID:1 00:11:23.535 Error Recovery Timeout: Unlimited 00:11:23.535 Command Set Identifier: NVM (00h) 00:11:23.535 Deallocate: Supported 00:11:23.535 Deallocated/Unwritten Error: Supported 00:11:23.535 Deallocated Read Value: All 0x00 00:11:23.535 Deallocate in Write Zeroes: Not Supported 00:11:23.535 Deallocated Guard Field: 0xFFFF 00:11:23.535 Flush: Supported 00:11:23.535 Reservation: Not Supported 00:11:23.535 Namespace Sharing Capabilities: Private 00:11:23.535 Size (in LBAs): 1310720 (5GiB) 00:11:23.535 Capacity (in LBAs): 1310720 (5GiB) 00:11:23.535 Utilization (in LBAs): 1310720 (5GiB) 00:11:23.535 Thin Provisioning: Not Supported 00:11:23.535 Per-NS Atomic Units: No 00:11:23.535 Maximum Single Source Range Length: 128 00:11:23.535 Maximum Copy Length: 128 00:11:23.535 Maximum Source Range Count: 128 00:11:23.535 NGUID/EUI64 Never Reused: No 00:11:23.535 Namespace Write Protected: No 00:11:23.535 Number of LBA Formats: 8 00:11:23.535 Current LBA Format: LBA Format #04 00:11:23.535 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:23.535 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:23.535 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:23.535 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:23.535 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:23.535 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:23.535 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:23.535 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:23.535 00:11:23.535 NVM Specific Namespace Data 00:11:23.535 
=========================== 00:11:23.535 Logical Block Storage Tag Mask: 0 00:11:23.535 Protection Information Capabilities: 00:11:23.535 16b Guard Protection Information Storage Tag Support: No 00:11:23.535 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:23.535 Storage Tag Check Read Support: No 00:11:23.535 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.535 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.535 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.535 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.535 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.535 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.535 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.535 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.535 19:33:14 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:23.535 19:33:14 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:11:23.794 ===================================================== 00:11:23.794 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:23.794 ===================================================== 00:11:23.794 Controller Capabilities/Features 00:11:23.794 ================================ 00:11:23.794 Vendor ID: 1b36 00:11:23.794 Subsystem Vendor ID: 1af4 00:11:23.794 Serial Number: 12342 00:11:23.794 Model Number: QEMU NVMe Ctrl 00:11:23.794 Firmware Version: 8.0.0 00:11:23.794 Recommended Arb Burst: 6 00:11:23.794 IEEE OUI Identifier: 00 54 52 00:11:23.794 Multi-path I/O 00:11:23.794 May have multiple subsystem ports: No 00:11:23.794 May have multiple controllers: No 00:11:23.794 Associated with SR-IOV VF: No 00:11:23.794 Max Data Transfer Size: 524288 00:11:23.794 Max Number of Namespaces: 256 00:11:23.794 Max Number of I/O Queues: 64 00:11:23.794 NVMe Specification Version (VS): 1.4 00:11:23.794 NVMe Specification Version (Identify): 1.4 00:11:23.794 Maximum Queue Entries: 2048 00:11:23.794 Contiguous Queues Required: Yes 00:11:23.794 Arbitration Mechanisms Supported 00:11:23.794 Weighted Round Robin: Not Supported 00:11:23.794 Vendor Specific: Not Supported 00:11:23.794 Reset Timeout: 7500 ms 00:11:23.794 Doorbell Stride: 4 bytes 00:11:23.794 NVM Subsystem Reset: Not Supported 00:11:23.794 Command Sets Supported 00:11:23.794 NVM Command Set: Supported 00:11:23.794 Boot Partition: Not Supported 00:11:23.794 Memory Page Size Minimum: 4096 bytes 00:11:23.794 Memory Page Size Maximum: 65536 bytes 00:11:23.794 Persistent Memory Region: Not Supported 00:11:23.794 Optional Asynchronous Events Supported 00:11:23.794 Namespace Attribute Notices: Supported 00:11:23.794 Firmware Activation Notices: Not Supported 00:11:23.794 ANA Change Notices: Not Supported 00:11:23.794 PLE Aggregate Log Change Notices: Not Supported 00:11:23.794 LBA Status Info Alert Notices: Not Supported 00:11:23.794 EGE Aggregate Log Change Notices: Not Supported 00:11:23.794 Normal NVM Subsystem Shutdown event: Not Supported 00:11:23.794 Zone Descriptor Change Notices: Not Supported 00:11:23.794 Discovery Log Change Notices: Not Supported 
00:11:23.794 Controller Attributes 00:11:23.794 128-bit Host Identifier: Not Supported 00:11:23.794 Non-Operational Permissive Mode: Not Supported 00:11:23.794 NVM Sets: Not Supported 00:11:23.794 Read Recovery Levels: Not Supported 00:11:23.794 Endurance Groups: Not Supported 00:11:23.794 Predictable Latency Mode: Not Supported 00:11:23.794 Traffic Based Keep ALive: Not Supported 00:11:23.794 Namespace Granularity: Not Supported 00:11:23.794 SQ Associations: Not Supported 00:11:23.794 UUID List: Not Supported 00:11:23.794 Multi-Domain Subsystem: Not Supported 00:11:23.794 Fixed Capacity Management: Not Supported 00:11:23.794 Variable Capacity Management: Not Supported 00:11:23.794 Delete Endurance Group: Not Supported 00:11:23.794 Delete NVM Set: Not Supported 00:11:23.794 Extended LBA Formats Supported: Supported 00:11:23.794 Flexible Data Placement Supported: Not Supported 00:11:23.794 00:11:23.794 Controller Memory Buffer Support 00:11:23.794 ================================ 00:11:23.794 Supported: No 00:11:23.794 00:11:23.794 Persistent Memory Region Support 00:11:23.794 ================================ 00:11:23.794 Supported: No 00:11:23.794 00:11:23.794 Admin Command Set Attributes 00:11:23.794 ============================ 00:11:23.794 Security Send/Receive: Not Supported 00:11:23.794 Format NVM: Supported 00:11:23.794 Firmware Activate/Download: Not Supported 00:11:23.794 Namespace Management: Supported 00:11:23.794 Device Self-Test: Not Supported 00:11:23.794 Directives: Supported 00:11:23.794 NVMe-MI: Not Supported 00:11:23.794 Virtualization Management: Not Supported 00:11:23.794 Doorbell Buffer Config: Supported 00:11:23.794 Get LBA Status Capability: Not Supported 00:11:23.794 Command & Feature Lockdown Capability: Not Supported 00:11:23.795 Abort Command Limit: 4 00:11:23.795 Async Event Request Limit: 4 00:11:23.795 Number of Firmware Slots: N/A 00:11:23.795 Firmware Slot 1 Read-Only: N/A 00:11:23.795 Firmware Activation Without Reset: N/A 00:11:23.795 Multiple Update Detection Support: N/A 00:11:23.795 Firmware Update Granularity: No Information Provided 00:11:23.795 Per-Namespace SMART Log: Yes 00:11:23.795 Asymmetric Namespace Access Log Page: Not Supported 00:11:23.795 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:23.795 Command Effects Log Page: Supported 00:11:23.795 Get Log Page Extended Data: Supported 00:11:23.795 Telemetry Log Pages: Not Supported 00:11:23.795 Persistent Event Log Pages: Not Supported 00:11:23.795 Supported Log Pages Log Page: May Support 00:11:23.795 Commands Supported & Effects Log Page: Not Supported 00:11:23.795 Feature Identifiers & Effects Log Page:May Support 00:11:23.795 NVMe-MI Commands & Effects Log Page: May Support 00:11:23.795 Data Area 4 for Telemetry Log: Not Supported 00:11:23.795 Error Log Page Entries Supported: 1 00:11:23.795 Keep Alive: Not Supported 00:11:23.795 00:11:23.795 NVM Command Set Attributes 00:11:23.795 ========================== 00:11:23.795 Submission Queue Entry Size 00:11:23.795 Max: 64 00:11:23.795 Min: 64 00:11:23.795 Completion Queue Entry Size 00:11:23.795 Max: 16 00:11:23.795 Min: 16 00:11:23.795 Number of Namespaces: 256 00:11:23.795 Compare Command: Supported 00:11:23.795 Write Uncorrectable Command: Not Supported 00:11:23.795 Dataset Management Command: Supported 00:11:23.795 Write Zeroes Command: Supported 00:11:23.795 Set Features Save Field: Supported 00:11:23.795 Reservations: Not Supported 00:11:23.795 Timestamp: Supported 00:11:23.795 Copy: Supported 00:11:23.795 Volatile Write Cache: Present 
00:11:23.795 Atomic Write Unit (Normal): 1 00:11:23.795 Atomic Write Unit (PFail): 1 00:11:23.795 Atomic Compare & Write Unit: 1 00:11:23.795 Fused Compare & Write: Not Supported 00:11:23.795 Scatter-Gather List 00:11:23.795 SGL Command Set: Supported 00:11:23.795 SGL Keyed: Not Supported 00:11:23.795 SGL Bit Bucket Descriptor: Not Supported 00:11:23.795 SGL Metadata Pointer: Not Supported 00:11:23.795 Oversized SGL: Not Supported 00:11:23.795 SGL Metadata Address: Not Supported 00:11:23.795 SGL Offset: Not Supported 00:11:23.795 Transport SGL Data Block: Not Supported 00:11:23.795 Replay Protected Memory Block: Not Supported 00:11:23.795 00:11:23.795 Firmware Slot Information 00:11:23.795 ========================= 00:11:23.795 Active slot: 1 00:11:23.795 Slot 1 Firmware Revision: 1.0 00:11:23.795 00:11:23.795 00:11:23.795 Commands Supported and Effects 00:11:23.795 ============================== 00:11:23.795 Admin Commands 00:11:23.795 -------------- 00:11:23.795 Delete I/O Submission Queue (00h): Supported 00:11:23.795 Create I/O Submission Queue (01h): Supported 00:11:23.795 Get Log Page (02h): Supported 00:11:23.795 Delete I/O Completion Queue (04h): Supported 00:11:23.795 Create I/O Completion Queue (05h): Supported 00:11:23.795 Identify (06h): Supported 00:11:23.795 Abort (08h): Supported 00:11:23.795 Set Features (09h): Supported 00:11:23.795 Get Features (0Ah): Supported 00:11:23.795 Asynchronous Event Request (0Ch): Supported 00:11:23.795 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:23.795 Directive Send (19h): Supported 00:11:23.795 Directive Receive (1Ah): Supported 00:11:23.795 Virtualization Management (1Ch): Supported 00:11:23.795 Doorbell Buffer Config (7Ch): Supported 00:11:23.795 Format NVM (80h): Supported LBA-Change 00:11:23.795 I/O Commands 00:11:23.795 ------------ 00:11:23.795 Flush (00h): Supported LBA-Change 00:11:23.795 Write (01h): Supported LBA-Change 00:11:23.795 Read (02h): Supported 00:11:23.795 Compare (05h): Supported 00:11:23.795 Write Zeroes (08h): Supported LBA-Change 00:11:23.795 Dataset Management (09h): Supported LBA-Change 00:11:23.795 Unknown (0Ch): Supported 00:11:23.795 Unknown (12h): Supported 00:11:23.795 Copy (19h): Supported LBA-Change 00:11:23.795 Unknown (1Dh): Supported LBA-Change 00:11:23.795 00:11:23.795 Error Log 00:11:23.795 ========= 00:11:23.795 00:11:23.795 Arbitration 00:11:23.795 =========== 00:11:23.795 Arbitration Burst: no limit 00:11:23.795 00:11:23.795 Power Management 00:11:23.795 ================ 00:11:23.795 Number of Power States: 1 00:11:23.795 Current Power State: Power State #0 00:11:23.795 Power State #0: 00:11:23.795 Max Power: 25.00 W 00:11:23.795 Non-Operational State: Operational 00:11:23.795 Entry Latency: 16 microseconds 00:11:23.795 Exit Latency: 4 microseconds 00:11:23.795 Relative Read Throughput: 0 00:11:23.795 Relative Read Latency: 0 00:11:23.795 Relative Write Throughput: 0 00:11:23.795 Relative Write Latency: 0 00:11:23.795 Idle Power: Not Reported 00:11:23.795 Active Power: Not Reported 00:11:23.795 Non-Operational Permissive Mode: Not Supported 00:11:23.795 00:11:23.795 Health Information 00:11:23.795 ================== 00:11:23.795 Critical Warnings: 00:11:23.795 Available Spare Space: OK 00:11:23.795 Temperature: OK 00:11:23.795 Device Reliability: OK 00:11:23.795 Read Only: No 00:11:23.795 Volatile Memory Backup: OK 00:11:23.795 Current Temperature: 323 Kelvin (50 Celsius) 00:11:23.795 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:23.795 Available Spare: 0% 00:11:23.795 
Available Spare Threshold: 0% 00:11:23.795 Life Percentage Used: 0% 00:11:23.795 Data Units Read: 2282 00:11:23.795 Data Units Written: 1962 00:11:23.795 Host Read Commands: 104809 00:11:23.795 Host Write Commands: 100579 00:11:23.795 Controller Busy Time: 0 minutes 00:11:23.795 Power Cycles: 0 00:11:23.795 Power On Hours: 0 hours 00:11:23.795 Unsafe Shutdowns: 0 00:11:23.795 Unrecoverable Media Errors: 0 00:11:23.795 Lifetime Error Log Entries: 0 00:11:23.795 Warning Temperature Time: 0 minutes 00:11:23.795 Critical Temperature Time: 0 minutes 00:11:23.795 00:11:23.795 Number of Queues 00:11:23.795 ================ 00:11:23.795 Number of I/O Submission Queues: 64 00:11:23.795 Number of I/O Completion Queues: 64 00:11:23.795 00:11:23.795 ZNS Specific Controller Data 00:11:23.795 ============================ 00:11:23.795 Zone Append Size Limit: 0 00:11:23.795 00:11:23.795 00:11:23.795 Active Namespaces 00:11:23.795 ================= 00:11:23.795 Namespace ID:1 00:11:23.795 Error Recovery Timeout: Unlimited 00:11:23.795 Command Set Identifier: NVM (00h) 00:11:23.795 Deallocate: Supported 00:11:23.795 Deallocated/Unwritten Error: Supported 00:11:23.795 Deallocated Read Value: All 0x00 00:11:23.795 Deallocate in Write Zeroes: Not Supported 00:11:23.795 Deallocated Guard Field: 0xFFFF 00:11:23.795 Flush: Supported 00:11:23.795 Reservation: Not Supported 00:11:23.795 Namespace Sharing Capabilities: Private 00:11:23.795 Size (in LBAs): 1048576 (4GiB) 00:11:23.795 Capacity (in LBAs): 1048576 (4GiB) 00:11:23.795 Utilization (in LBAs): 1048576 (4GiB) 00:11:23.795 Thin Provisioning: Not Supported 00:11:23.795 Per-NS Atomic Units: No 00:11:23.795 Maximum Single Source Range Length: 128 00:11:23.795 Maximum Copy Length: 128 00:11:23.795 Maximum Source Range Count: 128 00:11:23.795 NGUID/EUI64 Never Reused: No 00:11:23.795 Namespace Write Protected: No 00:11:23.795 Number of LBA Formats: 8 00:11:23.795 Current LBA Format: LBA Format #04 00:11:23.795 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:23.795 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:23.795 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:23.795 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:23.795 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:23.795 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:23.795 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:23.795 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:23.795 00:11:23.795 NVM Specific Namespace Data 00:11:23.795 =========================== 00:11:23.795 Logical Block Storage Tag Mask: 0 00:11:23.795 Protection Information Capabilities: 00:11:23.795 16b Guard Protection Information Storage Tag Support: No 00:11:23.795 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:23.795 Storage Tag Check Read Support: No 00:11:23.795 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.795 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.795 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.795 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.795 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.795 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.795 Extended LBA Format #06: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:11:23.795 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.795 Namespace ID:2 00:11:23.795 Error Recovery Timeout: Unlimited 00:11:23.795 Command Set Identifier: NVM (00h) 00:11:23.795 Deallocate: Supported 00:11:23.795 Deallocated/Unwritten Error: Supported 00:11:23.795 Deallocated Read Value: All 0x00 00:11:23.795 Deallocate in Write Zeroes: Not Supported 00:11:23.795 Deallocated Guard Field: 0xFFFF 00:11:23.795 Flush: Supported 00:11:23.795 Reservation: Not Supported 00:11:23.795 Namespace Sharing Capabilities: Private 00:11:23.795 Size (in LBAs): 1048576 (4GiB) 00:11:23.796 Capacity (in LBAs): 1048576 (4GiB) 00:11:23.796 Utilization (in LBAs): 1048576 (4GiB) 00:11:23.796 Thin Provisioning: Not Supported 00:11:23.796 Per-NS Atomic Units: No 00:11:23.796 Maximum Single Source Range Length: 128 00:11:23.796 Maximum Copy Length: 128 00:11:23.796 Maximum Source Range Count: 128 00:11:23.796 NGUID/EUI64 Never Reused: No 00:11:23.796 Namespace Write Protected: No 00:11:23.796 Number of LBA Formats: 8 00:11:23.796 Current LBA Format: LBA Format #04 00:11:23.796 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:23.796 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:23.796 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:23.796 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:23.796 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:23.796 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:23.796 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:23.796 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:23.796 00:11:23.796 NVM Specific Namespace Data 00:11:23.796 =========================== 00:11:23.796 Logical Block Storage Tag Mask: 0 00:11:23.796 Protection Information Capabilities: 00:11:23.796 16b Guard Protection Information Storage Tag Support: No 00:11:23.796 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:23.796 Storage Tag Check Read Support: No 00:11:23.796 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.796 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.796 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.796 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.796 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.796 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.796 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.796 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.796 Namespace ID:3 00:11:23.796 Error Recovery Timeout: Unlimited 00:11:23.796 Command Set Identifier: NVM (00h) 00:11:23.796 Deallocate: Supported 00:11:23.796 Deallocated/Unwritten Error: Supported 00:11:23.796 Deallocated Read Value: All 0x00 00:11:23.796 Deallocate in Write Zeroes: Not Supported 00:11:23.796 Deallocated Guard Field: 0xFFFF 00:11:23.796 Flush: Supported 00:11:23.796 Reservation: Not Supported 00:11:23.796 Namespace Sharing Capabilities: Private 00:11:23.796 Size (in LBAs): 1048576 (4GiB) 00:11:23.796 Capacity (in LBAs): 1048576 (4GiB) 00:11:23.796 Utilization (in LBAs): 1048576 (4GiB) 00:11:23.796 Thin Provisioning: Not Supported 
00:11:23.796 Per-NS Atomic Units: No 00:11:23.796 Maximum Single Source Range Length: 128 00:11:23.796 Maximum Copy Length: 128 00:11:23.796 Maximum Source Range Count: 128 00:11:23.796 NGUID/EUI64 Never Reused: No 00:11:23.796 Namespace Write Protected: No 00:11:23.796 Number of LBA Formats: 8 00:11:23.796 Current LBA Format: LBA Format #04 00:11:23.796 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:23.796 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:23.796 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:23.796 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:23.796 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:23.796 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:23.796 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:23.796 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:23.796 00:11:23.796 NVM Specific Namespace Data 00:11:23.796 =========================== 00:11:23.796 Logical Block Storage Tag Mask: 0 00:11:23.796 Protection Information Capabilities: 00:11:23.796 16b Guard Protection Information Storage Tag Support: No 00:11:23.796 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:23.796 Storage Tag Check Read Support: No 00:11:23.796 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.796 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.796 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.796 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.796 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.796 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.796 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.796 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:23.796 19:33:14 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:23.796 19:33:14 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:11:24.054 ===================================================== 00:11:24.054 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:24.054 ===================================================== 00:11:24.054 Controller Capabilities/Features 00:11:24.054 ================================ 00:11:24.054 Vendor ID: 1b36 00:11:24.054 Subsystem Vendor ID: 1af4 00:11:24.054 Serial Number: 12343 00:11:24.054 Model Number: QEMU NVMe Ctrl 00:11:24.054 Firmware Version: 8.0.0 00:11:24.054 Recommended Arb Burst: 6 00:11:24.054 IEEE OUI Identifier: 00 54 52 00:11:24.054 Multi-path I/O 00:11:24.054 May have multiple subsystem ports: No 00:11:24.054 May have multiple controllers: Yes 00:11:24.054 Associated with SR-IOV VF: No 00:11:24.054 Max Data Transfer Size: 524288 00:11:24.054 Max Number of Namespaces: 256 00:11:24.054 Max Number of I/O Queues: 64 00:11:24.054 NVMe Specification Version (VS): 1.4 00:11:24.054 NVMe Specification Version (Identify): 1.4 00:11:24.054 Maximum Queue Entries: 2048 00:11:24.054 Contiguous Queues Required: Yes 00:11:24.054 Arbitration Mechanisms Supported 00:11:24.054 Weighted Round Robin: Not Supported 00:11:24.054 Vendor Specific: Not Supported 00:11:24.054 Reset Timeout: 7500 ms 00:11:24.054 
Doorbell Stride: 4 bytes 00:11:24.054 NVM Subsystem Reset: Not Supported 00:11:24.054 Command Sets Supported 00:11:24.054 NVM Command Set: Supported 00:11:24.054 Boot Partition: Not Supported 00:11:24.054 Memory Page Size Minimum: 4096 bytes 00:11:24.054 Memory Page Size Maximum: 65536 bytes 00:11:24.054 Persistent Memory Region: Not Supported 00:11:24.054 Optional Asynchronous Events Supported 00:11:24.054 Namespace Attribute Notices: Supported 00:11:24.054 Firmware Activation Notices: Not Supported 00:11:24.054 ANA Change Notices: Not Supported 00:11:24.054 PLE Aggregate Log Change Notices: Not Supported 00:11:24.054 LBA Status Info Alert Notices: Not Supported 00:11:24.054 EGE Aggregate Log Change Notices: Not Supported 00:11:24.054 Normal NVM Subsystem Shutdown event: Not Supported 00:11:24.054 Zone Descriptor Change Notices: Not Supported 00:11:24.054 Discovery Log Change Notices: Not Supported 00:11:24.054 Controller Attributes 00:11:24.054 128-bit Host Identifier: Not Supported 00:11:24.054 Non-Operational Permissive Mode: Not Supported 00:11:24.054 NVM Sets: Not Supported 00:11:24.054 Read Recovery Levels: Not Supported 00:11:24.054 Endurance Groups: Supported 00:11:24.054 Predictable Latency Mode: Not Supported 00:11:24.054 Traffic Based Keep ALive: Not Supported 00:11:24.054 Namespace Granularity: Not Supported 00:11:24.054 SQ Associations: Not Supported 00:11:24.054 UUID List: Not Supported 00:11:24.054 Multi-Domain Subsystem: Not Supported 00:11:24.054 Fixed Capacity Management: Not Supported 00:11:24.054 Variable Capacity Management: Not Supported 00:11:24.054 Delete Endurance Group: Not Supported 00:11:24.054 Delete NVM Set: Not Supported 00:11:24.054 Extended LBA Formats Supported: Supported 00:11:24.054 Flexible Data Placement Supported: Supported 00:11:24.054 00:11:24.054 Controller Memory Buffer Support 00:11:24.054 ================================ 00:11:24.054 Supported: No 00:11:24.054 00:11:24.054 Persistent Memory Region Support 00:11:24.054 ================================ 00:11:24.054 Supported: No 00:11:24.054 00:11:24.054 Admin Command Set Attributes 00:11:24.054 ============================ 00:11:24.054 Security Send/Receive: Not Supported 00:11:24.054 Format NVM: Supported 00:11:24.054 Firmware Activate/Download: Not Supported 00:11:24.054 Namespace Management: Supported 00:11:24.054 Device Self-Test: Not Supported 00:11:24.055 Directives: Supported 00:11:24.055 NVMe-MI: Not Supported 00:11:24.055 Virtualization Management: Not Supported 00:11:24.055 Doorbell Buffer Config: Supported 00:11:24.055 Get LBA Status Capability: Not Supported 00:11:24.055 Command & Feature Lockdown Capability: Not Supported 00:11:24.055 Abort Command Limit: 4 00:11:24.055 Async Event Request Limit: 4 00:11:24.055 Number of Firmware Slots: N/A 00:11:24.055 Firmware Slot 1 Read-Only: N/A 00:11:24.055 Firmware Activation Without Reset: N/A 00:11:24.055 Multiple Update Detection Support: N/A 00:11:24.055 Firmware Update Granularity: No Information Provided 00:11:24.055 Per-Namespace SMART Log: Yes 00:11:24.055 Asymmetric Namespace Access Log Page: Not Supported 00:11:24.055 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:24.055 Command Effects Log Page: Supported 00:11:24.055 Get Log Page Extended Data: Supported 00:11:24.055 Telemetry Log Pages: Not Supported 00:11:24.055 Persistent Event Log Pages: Not Supported 00:11:24.055 Supported Log Pages Log Page: May Support 00:11:24.055 Commands Supported & Effects Log Page: Not Supported 00:11:24.055 Feature Identifiers & Effects Log 
Page:May Support 00:11:24.055 NVMe-MI Commands & Effects Log Page: May Support 00:11:24.055 Data Area 4 for Telemetry Log: Not Supported 00:11:24.055 Error Log Page Entries Supported: 1 00:11:24.055 Keep Alive: Not Supported 00:11:24.055 00:11:24.055 NVM Command Set Attributes 00:11:24.055 ========================== 00:11:24.055 Submission Queue Entry Size 00:11:24.055 Max: 64 00:11:24.055 Min: 64 00:11:24.055 Completion Queue Entry Size 00:11:24.055 Max: 16 00:11:24.055 Min: 16 00:11:24.055 Number of Namespaces: 256 00:11:24.055 Compare Command: Supported 00:11:24.055 Write Uncorrectable Command: Not Supported 00:11:24.055 Dataset Management Command: Supported 00:11:24.055 Write Zeroes Command: Supported 00:11:24.055 Set Features Save Field: Supported 00:11:24.055 Reservations: Not Supported 00:11:24.055 Timestamp: Supported 00:11:24.055 Copy: Supported 00:11:24.055 Volatile Write Cache: Present 00:11:24.055 Atomic Write Unit (Normal): 1 00:11:24.055 Atomic Write Unit (PFail): 1 00:11:24.055 Atomic Compare & Write Unit: 1 00:11:24.055 Fused Compare & Write: Not Supported 00:11:24.055 Scatter-Gather List 00:11:24.055 SGL Command Set: Supported 00:11:24.055 SGL Keyed: Not Supported 00:11:24.055 SGL Bit Bucket Descriptor: Not Supported 00:11:24.055 SGL Metadata Pointer: Not Supported 00:11:24.055 Oversized SGL: Not Supported 00:11:24.055 SGL Metadata Address: Not Supported 00:11:24.055 SGL Offset: Not Supported 00:11:24.055 Transport SGL Data Block: Not Supported 00:11:24.055 Replay Protected Memory Block: Not Supported 00:11:24.055 00:11:24.055 Firmware Slot Information 00:11:24.055 ========================= 00:11:24.055 Active slot: 1 00:11:24.055 Slot 1 Firmware Revision: 1.0 00:11:24.055 00:11:24.055 00:11:24.055 Commands Supported and Effects 00:11:24.055 ============================== 00:11:24.055 Admin Commands 00:11:24.055 -------------- 00:11:24.055 Delete I/O Submission Queue (00h): Supported 00:11:24.055 Create I/O Submission Queue (01h): Supported 00:11:24.055 Get Log Page (02h): Supported 00:11:24.055 Delete I/O Completion Queue (04h): Supported 00:11:24.055 Create I/O Completion Queue (05h): Supported 00:11:24.055 Identify (06h): Supported 00:11:24.055 Abort (08h): Supported 00:11:24.055 Set Features (09h): Supported 00:11:24.055 Get Features (0Ah): Supported 00:11:24.055 Asynchronous Event Request (0Ch): Supported 00:11:24.055 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:24.055 Directive Send (19h): Supported 00:11:24.055 Directive Receive (1Ah): Supported 00:11:24.055 Virtualization Management (1Ch): Supported 00:11:24.055 Doorbell Buffer Config (7Ch): Supported 00:11:24.055 Format NVM (80h): Supported LBA-Change 00:11:24.055 I/O Commands 00:11:24.055 ------------ 00:11:24.055 Flush (00h): Supported LBA-Change 00:11:24.055 Write (01h): Supported LBA-Change 00:11:24.055 Read (02h): Supported 00:11:24.055 Compare (05h): Supported 00:11:24.055 Write Zeroes (08h): Supported LBA-Change 00:11:24.055 Dataset Management (09h): Supported LBA-Change 00:11:24.055 Unknown (0Ch): Supported 00:11:24.055 Unknown (12h): Supported 00:11:24.055 Copy (19h): Supported LBA-Change 00:11:24.055 Unknown (1Dh): Supported LBA-Change 00:11:24.055 00:11:24.055 Error Log 00:11:24.055 ========= 00:11:24.055 00:11:24.055 Arbitration 00:11:24.055 =========== 00:11:24.055 Arbitration Burst: no limit 00:11:24.055 00:11:24.055 Power Management 00:11:24.055 ================ 00:11:24.055 Number of Power States: 1 00:11:24.055 Current Power State: Power State #0 00:11:24.055 Power State #0: 
00:11:24.055 Max Power: 25.00 W 00:11:24.055 Non-Operational State: Operational 00:11:24.055 Entry Latency: 16 microseconds 00:11:24.055 Exit Latency: 4 microseconds 00:11:24.055 Relative Read Throughput: 0 00:11:24.055 Relative Read Latency: 0 00:11:24.055 Relative Write Throughput: 0 00:11:24.055 Relative Write Latency: 0 00:11:24.055 Idle Power: Not Reported 00:11:24.055 Active Power: Not Reported 00:11:24.055 Non-Operational Permissive Mode: Not Supported 00:11:24.055 00:11:24.055 Health Information 00:11:24.055 ================== 00:11:24.055 Critical Warnings: 00:11:24.055 Available Spare Space: OK 00:11:24.055 Temperature: OK 00:11:24.055 Device Reliability: OK 00:11:24.055 Read Only: No 00:11:24.055 Volatile Memory Backup: OK 00:11:24.055 Current Temperature: 323 Kelvin (50 Celsius) 00:11:24.055 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:24.055 Available Spare: 0% 00:11:24.055 Available Spare Threshold: 0% 00:11:24.055 Life Percentage Used: 0% 00:11:24.055 Data Units Read: 810 00:11:24.055 Data Units Written: 703 00:11:24.055 Host Read Commands: 35398 00:11:24.055 Host Write Commands: 33988 00:11:24.055 Controller Busy Time: 0 minutes 00:11:24.055 Power Cycles: 0 00:11:24.055 Power On Hours: 0 hours 00:11:24.055 Unsafe Shutdowns: 0 00:11:24.055 Unrecoverable Media Errors: 0 00:11:24.055 Lifetime Error Log Entries: 0 00:11:24.055 Warning Temperature Time: 0 minutes 00:11:24.055 Critical Temperature Time: 0 minutes 00:11:24.055 00:11:24.055 Number of Queues 00:11:24.055 ================ 00:11:24.055 Number of I/O Submission Queues: 64 00:11:24.055 Number of I/O Completion Queues: 64 00:11:24.055 00:11:24.055 ZNS Specific Controller Data 00:11:24.055 ============================ 00:11:24.055 Zone Append Size Limit: 0 00:11:24.055 00:11:24.055 00:11:24.055 Active Namespaces 00:11:24.055 ================= 00:11:24.055 Namespace ID:1 00:11:24.055 Error Recovery Timeout: Unlimited 00:11:24.055 Command Set Identifier: NVM (00h) 00:11:24.055 Deallocate: Supported 00:11:24.055 Deallocated/Unwritten Error: Supported 00:11:24.055 Deallocated Read Value: All 0x00 00:11:24.055 Deallocate in Write Zeroes: Not Supported 00:11:24.055 Deallocated Guard Field: 0xFFFF 00:11:24.055 Flush: Supported 00:11:24.055 Reservation: Not Supported 00:11:24.055 Namespace Sharing Capabilities: Multiple Controllers 00:11:24.055 Size (in LBAs): 262144 (1GiB) 00:11:24.055 Capacity (in LBAs): 262144 (1GiB) 00:11:24.055 Utilization (in LBAs): 262144 (1GiB) 00:11:24.055 Thin Provisioning: Not Supported 00:11:24.055 Per-NS Atomic Units: No 00:11:24.055 Maximum Single Source Range Length: 128 00:11:24.055 Maximum Copy Length: 128 00:11:24.055 Maximum Source Range Count: 128 00:11:24.055 NGUID/EUI64 Never Reused: No 00:11:24.055 Namespace Write Protected: No 00:11:24.055 Endurance group ID: 1 00:11:24.055 Number of LBA Formats: 8 00:11:24.055 Current LBA Format: LBA Format #04 00:11:24.055 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:24.055 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:24.055 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:24.055 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:24.055 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:24.055 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:24.055 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:24.055 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:24.055 00:11:24.055 Get Feature FDP: 00:11:24.055 ================ 00:11:24.055 Enabled: Yes 00:11:24.055 FDP configuration index: 0 00:11:24.055 
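As a quick cross-check of the sizes reported by spdk_nvme_identify above: the namespace on the FDP-enabled controller (serial 12343) reports 262144 LBAs with LBA Format #04 selected (4096-byte data blocks, no metadata), the 12341 namespace reports 1310720 LBAs, and the 4GiB namespaces report 1048576 LBAs at the same block size. Multiplying the LBA count by the block size in the shell reproduces the GiB figures printed above (a minimal sketch, using only numbers taken from this output):

  echo $(( 262144 * 4096 ))    # 1073741824 bytes = 1 GiB  (controller 12343, NS 1)
  echo $(( 1310720 * 4096 ))   # 5368709120 bytes = 5 GiB  (controller 12341, NS 1)
  echo $(( 1048576 * 4096 ))   # 4294967296 bytes = 4 GiB  (the 4GiB namespaces)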
00:11:24.055 FDP configurations log page 00:11:24.055 =========================== 00:11:24.055 Number of FDP configurations: 1 00:11:24.055 Version: 0 00:11:24.055 Size: 112 00:11:24.055 FDP Configuration Descriptor: 0 00:11:24.055 Descriptor Size: 96 00:11:24.055 Reclaim Group Identifier format: 2 00:11:24.055 FDP Volatile Write Cache: Not Present 00:11:24.055 FDP Configuration: Valid 00:11:24.055 Vendor Specific Size: 0 00:11:24.055 Number of Reclaim Groups: 2 00:11:24.055 Number of Recalim Unit Handles: 8 00:11:24.055 Max Placement Identifiers: 128 00:11:24.055 Number of Namespaces Suppprted: 256 00:11:24.055 Reclaim unit Nominal Size: 6000000 bytes 00:11:24.055 Estimated Reclaim Unit Time Limit: Not Reported 00:11:24.055 RUH Desc #000: RUH Type: Initially Isolated 00:11:24.055 RUH Desc #001: RUH Type: Initially Isolated 00:11:24.055 RUH Desc #002: RUH Type: Initially Isolated 00:11:24.056 RUH Desc #003: RUH Type: Initially Isolated 00:11:24.056 RUH Desc #004: RUH Type: Initially Isolated 00:11:24.056 RUH Desc #005: RUH Type: Initially Isolated 00:11:24.056 RUH Desc #006: RUH Type: Initially Isolated 00:11:24.056 RUH Desc #007: RUH Type: Initially Isolated 00:11:24.056 00:11:24.056 FDP reclaim unit handle usage log page 00:11:24.056 ====================================== 00:11:24.056 Number of Reclaim Unit Handles: 8 00:11:24.056 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:24.056 RUH Usage Desc #001: RUH Attributes: Unused 00:11:24.056 RUH Usage Desc #002: RUH Attributes: Unused 00:11:24.056 RUH Usage Desc #003: RUH Attributes: Unused 00:11:24.056 RUH Usage Desc #004: RUH Attributes: Unused 00:11:24.056 RUH Usage Desc #005: RUH Attributes: Unused 00:11:24.056 RUH Usage Desc #006: RUH Attributes: Unused 00:11:24.056 RUH Usage Desc #007: RUH Attributes: Unused 00:11:24.056 00:11:24.056 FDP statistics log page 00:11:24.056 ======================= 00:11:24.056 Host bytes with metadata written: 441032704 00:11:24.056 Media bytes with metadata written: 441106432 00:11:24.056 Media bytes erased: 0 00:11:24.056 00:11:24.056 FDP events log page 00:11:24.056 =================== 00:11:24.056 Number of FDP events: 0 00:11:24.056 00:11:24.056 NVM Specific Namespace Data 00:11:24.056 =========================== 00:11:24.056 Logical Block Storage Tag Mask: 0 00:11:24.056 Protection Information Capabilities: 00:11:24.056 16b Guard Protection Information Storage Tag Support: No 00:11:24.056 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:24.056 Storage Tag Check Read Support: No 00:11:24.056 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:24.056 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:24.056 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:24.056 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:24.056 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:24.056 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:24.056 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:24.056 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:24.056 00:11:24.056 real 0m1.908s 00:11:24.056 user 0m0.718s 00:11:24.056 sys 0m0.941s 00:11:24.056 19:33:14 nvme.nvme_identify -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:11:24.314 19:33:14 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:11:24.314 ************************************ 00:11:24.314 END TEST nvme_identify 00:11:24.314 ************************************ 00:11:24.314 19:33:14 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:24.314 19:33:14 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:11:24.314 19:33:14 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:24.314 19:33:14 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:24.314 19:33:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:24.314 ************************************ 00:11:24.314 START TEST nvme_perf 00:11:24.314 ************************************ 00:11:24.314 19:33:14 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:11:24.314 19:33:14 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:11:25.688 Initializing NVMe Controllers 00:11:25.688 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:25.688 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:25.688 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:25.688 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:25.688 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:25.688 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:25.688 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:25.688 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:25.688 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:25.688 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:25.688 Initialization complete. Launching workers. 
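For reference, the read phase launched above can be re-run by hand against the same SPDK build; the option annotations below are a hedged reading of spdk_nvme_perf's usage text and are not taken from this log:

# run from the root of an SPDK build tree (the CI job uses the absolute path shown above)
# -q 128    assumed: outstanding I/O (queue depth) per namespace
# -w read   workload pattern
# -o 12288  I/O size in bytes (12 KiB, i.e. 3 x 4 KiB blocks)
# -t 1      run time in seconds
# -LL       assumed: software latency tracking; given twice to get the detailed histograms printed below
# -i 0      assumed: shared-memory group ID, so the process can coexist with other SPDK apps
# -N        assumed: skip the shutdown-notification step when detaching controllers
./build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N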
00:11:25.688 ======================================================== 00:11:25.688 Latency(us) 00:11:25.688 Device Information : IOPS MiB/s Average min max 00:11:25.688 PCIE (0000:00:10.0) NSID 1 from core 0: 10800.47 126.57 11883.32 9200.17 55809.92 00:11:25.688 PCIE (0000:00:11.0) NSID 1 from core 0: 10800.47 126.57 11849.79 9246.57 52073.43 00:11:25.688 PCIE (0000:00:13.0) NSID 1 from core 0: 10800.47 126.57 11813.59 9371.92 49072.84 00:11:25.688 PCIE (0000:00:12.0) NSID 1 from core 0: 10800.47 126.57 11776.74 9283.39 45418.49 00:11:25.688 PCIE (0000:00:12.0) NSID 2 from core 0: 10864.38 127.32 11669.83 9241.55 36252.39 00:11:25.688 PCIE (0000:00:12.0) NSID 3 from core 0: 10864.38 127.32 11632.44 9250.09 32408.79 00:11:25.688 ======================================================== 00:11:25.688 Total : 64930.63 760.91 11770.72 9200.17 55809.92 00:11:25.688 00:11:25.688 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:25.688 ================================================================================= 00:11:25.688 1.00000% : 9861.608us 00:11:25.688 10.00000% : 10360.930us 00:11:25.688 25.00000% : 10735.421us 00:11:25.688 50.00000% : 11234.743us 00:11:25.688 75.00000% : 11921.310us 00:11:25.688 90.00000% : 12857.539us 00:11:25.688 95.00000% : 13856.183us 00:11:25.688 98.00000% : 15541.394us 00:11:25.688 99.00000% : 46187.276us 00:11:25.688 99.50000% : 53427.444us 00:11:25.688 99.90000% : 55424.731us 00:11:25.688 99.99000% : 55924.053us 00:11:25.688 99.99900% : 55924.053us 00:11:25.688 99.99990% : 55924.053us 00:11:25.688 99.99999% : 55924.053us 00:11:25.688 00:11:25.689 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:25.689 ================================================================================= 00:11:25.689 1.00000% : 9861.608us 00:11:25.689 10.00000% : 10423.345us 00:11:25.689 25.00000% : 10797.836us 00:11:25.689 50.00000% : 11234.743us 00:11:25.689 75.00000% : 11921.310us 00:11:25.689 90.00000% : 12857.539us 00:11:25.689 95.00000% : 13856.183us 00:11:25.689 98.00000% : 15666.225us 00:11:25.689 99.00000% : 42692.023us 00:11:25.689 99.50000% : 49932.190us 00:11:25.689 99.90000% : 51679.817us 00:11:25.689 99.99000% : 52179.139us 00:11:25.689 99.99900% : 52179.139us 00:11:25.689 99.99990% : 52179.139us 00:11:25.689 99.99999% : 52179.139us 00:11:25.689 00:11:25.689 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:25.689 ================================================================================= 00:11:25.689 1.00000% : 9924.023us 00:11:25.689 10.00000% : 10423.345us 00:11:25.689 25.00000% : 10797.836us 00:11:25.689 50.00000% : 11234.743us 00:11:25.689 75.00000% : 11858.895us 00:11:25.689 90.00000% : 12795.124us 00:11:25.689 95.00000% : 14043.429us 00:11:25.689 98.00000% : 15478.979us 00:11:25.689 99.00000% : 39446.430us 00:11:25.689 99.50000% : 46686.598us 00:11:25.689 99.90000% : 48683.886us 00:11:25.689 99.99000% : 49183.208us 00:11:25.689 99.99900% : 49183.208us 00:11:25.689 99.99990% : 49183.208us 00:11:25.689 99.99999% : 49183.208us 00:11:25.689 00:11:25.689 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:25.689 ================================================================================= 00:11:25.689 1.00000% : 9924.023us 00:11:25.689 10.00000% : 10423.345us 00:11:25.689 25.00000% : 10797.836us 00:11:25.689 50.00000% : 11234.743us 00:11:25.689 75.00000% : 11921.310us 00:11:25.689 90.00000% : 12795.124us 00:11:25.689 95.00000% : 14105.844us 00:11:25.689 98.00000% : 
15791.055us 00:11:25.689 99.00000% : 35701.516us 00:11:25.689 99.50000% : 43191.345us 00:11:25.689 99.90000% : 44938.971us 00:11:25.689 99.99000% : 45438.293us 00:11:25.689 99.99900% : 45438.293us 00:11:25.689 99.99990% : 45438.293us 00:11:25.689 99.99999% : 45438.293us 00:11:25.689 00:11:25.689 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:25.689 ================================================================================= 00:11:25.689 1.00000% : 9924.023us 00:11:25.689 10.00000% : 10423.345us 00:11:25.689 25.00000% : 10797.836us 00:11:25.689 50.00000% : 11234.743us 00:11:25.689 75.00000% : 11921.310us 00:11:25.689 90.00000% : 12795.124us 00:11:25.689 95.00000% : 14105.844us 00:11:25.689 98.00000% : 16227.962us 00:11:25.689 99.00000% : 26089.570us 00:11:25.689 99.50000% : 33953.890us 00:11:25.689 99.90000% : 35951.177us 00:11:25.689 99.99000% : 36450.499us 00:11:25.689 99.99900% : 36450.499us 00:11:25.689 99.99990% : 36450.499us 00:11:25.689 99.99999% : 36450.499us 00:11:25.689 00:11:25.689 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:25.689 ================================================================================= 00:11:25.689 1.00000% : 9924.023us 00:11:25.689 10.00000% : 10423.345us 00:11:25.689 25.00000% : 10797.836us 00:11:25.689 50.00000% : 11234.743us 00:11:25.689 75.00000% : 11921.310us 00:11:25.689 90.00000% : 12795.124us 00:11:25.689 95.00000% : 13981.013us 00:11:25.689 98.00000% : 16477.623us 00:11:25.689 99.00000% : 22469.486us 00:11:25.689 99.50000% : 29959.314us 00:11:25.689 99.90000% : 31956.602us 00:11:25.689 99.99000% : 32455.924us 00:11:25.689 99.99900% : 32455.924us 00:11:25.689 99.99990% : 32455.924us 00:11:25.689 99.99999% : 32455.924us 00:11:25.689 00:11:25.689 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:25.689 ============================================================================== 00:11:25.689 Range in us Cumulative IO count 00:11:25.689 9175.040 - 9237.455: 0.0185% ( 2) 00:11:25.689 9237.455 - 9299.870: 0.0925% ( 8) 00:11:25.689 9299.870 - 9362.286: 0.1572% ( 7) 00:11:25.689 9362.286 - 9424.701: 0.2311% ( 8) 00:11:25.689 9424.701 - 9487.116: 0.3236% ( 10) 00:11:25.689 9487.116 - 9549.531: 0.4253% ( 11) 00:11:25.689 9549.531 - 9611.947: 0.5270% ( 11) 00:11:25.689 9611.947 - 9674.362: 0.6564% ( 14) 00:11:25.689 9674.362 - 9736.777: 0.8229% ( 18) 00:11:25.689 9736.777 - 9799.192: 0.9708% ( 16) 00:11:25.689 9799.192 - 9861.608: 1.1464% ( 19) 00:11:25.689 9861.608 - 9924.023: 1.4608% ( 34) 00:11:25.689 9924.023 - 9986.438: 2.0340% ( 62) 00:11:25.689 9986.438 - 10048.853: 2.9308% ( 97) 00:11:25.689 10048.853 - 10111.269: 4.1513% ( 132) 00:11:25.689 10111.269 - 10173.684: 5.3994% ( 135) 00:11:25.689 10173.684 - 10236.099: 6.9434% ( 167) 00:11:25.689 10236.099 - 10298.514: 8.6631% ( 186) 00:11:25.689 10298.514 - 10360.930: 10.7064% ( 221) 00:11:25.689 10360.930 - 10423.345: 12.8976% ( 237) 00:11:25.689 10423.345 - 10485.760: 15.3569% ( 266) 00:11:25.689 10485.760 - 10548.175: 17.8994% ( 275) 00:11:25.689 10548.175 - 10610.590: 20.3772% ( 268) 00:11:25.689 10610.590 - 10673.006: 22.8920% ( 272) 00:11:25.689 10673.006 - 10735.421: 25.5640% ( 289) 00:11:25.689 10735.421 - 10797.836: 28.4209% ( 309) 00:11:25.689 10797.836 - 10860.251: 31.2130% ( 302) 00:11:25.689 10860.251 - 10922.667: 34.2733% ( 331) 00:11:25.689 10922.667 - 10985.082: 37.2041% ( 317) 00:11:25.689 10985.082 - 11047.497: 40.3754% ( 343) 00:11:25.689 11047.497 - 11109.912: 43.5558% ( 344) 00:11:25.689 11109.912 - 
11172.328: 46.7271% ( 343) 00:11:25.689 11172.328 - 11234.743: 50.0832% ( 363) 00:11:25.689 11234.743 - 11297.158: 53.4393% ( 363) 00:11:25.689 11297.158 - 11359.573: 56.6753% ( 350) 00:11:25.689 11359.573 - 11421.989: 59.8188% ( 340) 00:11:25.689 11421.989 - 11484.404: 62.5277% ( 293) 00:11:25.689 11484.404 - 11546.819: 64.9871% ( 266) 00:11:25.689 11546.819 - 11609.234: 67.1043% ( 229) 00:11:25.689 11609.234 - 11671.650: 69.0551% ( 211) 00:11:25.689 11671.650 - 11734.065: 70.8487% ( 194) 00:11:25.689 11734.065 - 11796.480: 72.5129% ( 180) 00:11:25.689 11796.480 - 11858.895: 74.0385% ( 165) 00:11:25.689 11858.895 - 11921.310: 75.4715% ( 155) 00:11:25.689 11921.310 - 11983.726: 77.0248% ( 168) 00:11:25.689 11983.726 - 12046.141: 78.4393% ( 153) 00:11:25.689 12046.141 - 12108.556: 79.7800% ( 145) 00:11:25.689 12108.556 - 12170.971: 81.0004% ( 132) 00:11:25.689 12170.971 - 12233.387: 82.0636% ( 115) 00:11:25.689 12233.387 - 12295.802: 83.1361% ( 116) 00:11:25.689 12295.802 - 12358.217: 83.9497% ( 88) 00:11:25.689 12358.217 - 12420.632: 84.8095% ( 93) 00:11:25.689 12420.632 - 12483.048: 85.6139% ( 87) 00:11:25.689 12483.048 - 12545.463: 86.5292% ( 99) 00:11:25.689 12545.463 - 12607.878: 87.4075% ( 95) 00:11:25.689 12607.878 - 12670.293: 88.2859% ( 95) 00:11:25.689 12670.293 - 12732.709: 89.0625% ( 84) 00:11:25.689 12732.709 - 12795.124: 89.6912% ( 68) 00:11:25.689 12795.124 - 12857.539: 90.3569% ( 72) 00:11:25.689 12857.539 - 12919.954: 90.9024% ( 59) 00:11:25.689 12919.954 - 12982.370: 91.4571% ( 60) 00:11:25.689 12982.370 - 13044.785: 91.9564% ( 54) 00:11:25.689 13044.785 - 13107.200: 92.2984% ( 37) 00:11:25.689 13107.200 - 13169.615: 92.6683% ( 40) 00:11:25.689 13169.615 - 13232.030: 92.9456% ( 30) 00:11:25.689 13232.030 - 13294.446: 93.1953% ( 27) 00:11:25.689 13294.446 - 13356.861: 93.4357% ( 26) 00:11:25.689 13356.861 - 13419.276: 93.6945% ( 28) 00:11:25.689 13419.276 - 13481.691: 93.9534% ( 28) 00:11:25.689 13481.691 - 13544.107: 94.1753% ( 24) 00:11:25.689 13544.107 - 13606.522: 94.3879% ( 23) 00:11:25.689 13606.522 - 13668.937: 94.5729% ( 20) 00:11:25.689 13668.937 - 13731.352: 94.7670% ( 21) 00:11:25.689 13731.352 - 13793.768: 94.9704% ( 22) 00:11:25.689 13793.768 - 13856.183: 95.1461% ( 19) 00:11:25.689 13856.183 - 13918.598: 95.2848% ( 15) 00:11:25.689 13918.598 - 13981.013: 95.4419% ( 17) 00:11:25.689 13981.013 - 14043.429: 95.6268% ( 20) 00:11:25.689 14043.429 - 14105.844: 95.8118% ( 20) 00:11:25.689 14105.844 - 14168.259: 95.9874% ( 19) 00:11:25.689 14168.259 - 14230.674: 96.1076% ( 13) 00:11:25.689 14230.674 - 14293.090: 96.2463% ( 15) 00:11:25.689 14293.090 - 14355.505: 96.3388% ( 10) 00:11:25.689 14355.505 - 14417.920: 96.4589% ( 13) 00:11:25.689 14417.920 - 14480.335: 96.5791% ( 13) 00:11:25.689 14480.335 - 14542.750: 96.6531% ( 8) 00:11:25.689 14542.750 - 14605.166: 96.7456% ( 10) 00:11:25.689 14605.166 - 14667.581: 96.8380% ( 10) 00:11:25.689 14667.581 - 14729.996: 96.9397% ( 11) 00:11:25.689 14729.996 - 14792.411: 97.0229% ( 9) 00:11:25.689 14792.411 - 14854.827: 97.1246% ( 11) 00:11:25.689 14854.827 - 14917.242: 97.2263% ( 11) 00:11:25.689 14917.242 - 14979.657: 97.3188% ( 10) 00:11:25.689 14979.657 - 15042.072: 97.4020% ( 9) 00:11:25.689 15042.072 - 15104.488: 97.4945% ( 10) 00:11:25.689 15104.488 - 15166.903: 97.5592% ( 7) 00:11:25.689 15166.903 - 15229.318: 97.6424% ( 9) 00:11:25.689 15229.318 - 15291.733: 97.7163% ( 8) 00:11:25.689 15291.733 - 15354.149: 97.8180% ( 11) 00:11:25.689 15354.149 - 15416.564: 97.8920% ( 8) 00:11:25.689 15416.564 - 15478.979: 97.9845% ( 
10) 00:11:25.689 15478.979 - 15541.394: 98.0862% ( 11) 00:11:25.689 15541.394 - 15603.810: 98.1879% ( 11) 00:11:25.689 15603.810 - 15666.225: 98.2803% ( 10) 00:11:25.689 15666.225 - 15728.640: 98.3358% ( 6) 00:11:25.689 15728.640 - 15791.055: 98.3913% ( 6) 00:11:25.689 15791.055 - 15853.470: 98.4283% ( 4) 00:11:25.689 15853.470 - 15915.886: 98.4560% ( 3) 00:11:25.689 15915.886 - 15978.301: 98.4745% ( 2) 00:11:25.689 15978.301 - 16103.131: 98.5300% ( 6) 00:11:25.689 16103.131 - 16227.962: 98.5762% ( 5) 00:11:25.689 16227.962 - 16352.792: 98.6409% ( 7) 00:11:25.689 16352.792 - 16477.623: 98.6871% ( 5) 00:11:25.689 16477.623 - 16602.453: 98.7426% ( 6) 00:11:25.689 16602.453 - 16727.284: 98.7888% ( 5) 00:11:25.689 16727.284 - 16852.114: 98.8166% ( 3) 00:11:25.689 44938.971 - 45188.632: 98.8443% ( 3) 00:11:25.689 45188.632 - 45438.293: 98.8905% ( 5) 00:11:25.689 45438.293 - 45687.954: 98.9368% ( 5) 00:11:25.689 45687.954 - 45937.615: 98.9830% ( 5) 00:11:25.689 45937.615 - 46187.276: 99.0292% ( 5) 00:11:25.690 46187.276 - 46436.937: 99.0754% ( 5) 00:11:25.690 46436.937 - 46686.598: 99.1217% ( 5) 00:11:25.690 46686.598 - 46936.259: 99.1587% ( 4) 00:11:25.690 46936.259 - 47185.920: 99.2234% ( 7) 00:11:25.690 47185.920 - 47435.581: 99.2604% ( 4) 00:11:25.690 47435.581 - 47685.242: 99.3066% ( 5) 00:11:25.690 47685.242 - 47934.903: 99.3621% ( 6) 00:11:25.690 47934.903 - 48184.564: 99.4083% ( 5) 00:11:25.690 52678.461 - 52928.122: 99.4268% ( 2) 00:11:25.690 52928.122 - 53177.783: 99.4730% ( 5) 00:11:25.690 53177.783 - 53427.444: 99.5285% ( 6) 00:11:25.690 53427.444 - 53677.105: 99.5655% ( 4) 00:11:25.690 53677.105 - 53926.766: 99.6209% ( 6) 00:11:25.690 53926.766 - 54176.427: 99.6672% ( 5) 00:11:25.690 54176.427 - 54426.088: 99.7134% ( 5) 00:11:25.690 54426.088 - 54675.749: 99.7596% ( 5) 00:11:25.690 54675.749 - 54925.410: 99.8151% ( 6) 00:11:25.690 54925.410 - 55175.070: 99.8706% ( 6) 00:11:25.690 55175.070 - 55424.731: 99.9168% ( 5) 00:11:25.690 55424.731 - 55674.392: 99.9723% ( 6) 00:11:25.690 55674.392 - 55924.053: 100.0000% ( 3) 00:11:25.690 00:11:25.690 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:25.690 ============================================================================== 00:11:25.690 Range in us Cumulative IO count 00:11:25.690 9237.455 - 9299.870: 0.0370% ( 4) 00:11:25.690 9299.870 - 9362.286: 0.0832% ( 5) 00:11:25.690 9362.286 - 9424.701: 0.1202% ( 4) 00:11:25.690 9424.701 - 9487.116: 0.1849% ( 7) 00:11:25.690 9487.116 - 9549.531: 0.2496% ( 7) 00:11:25.690 9549.531 - 9611.947: 0.3236% ( 8) 00:11:25.690 9611.947 - 9674.362: 0.4808% ( 17) 00:11:25.690 9674.362 - 9736.777: 0.6472% ( 18) 00:11:25.690 9736.777 - 9799.192: 0.8598% ( 23) 00:11:25.690 9799.192 - 9861.608: 1.1095% ( 27) 00:11:25.690 9861.608 - 9924.023: 1.3406% ( 25) 00:11:25.690 9924.023 - 9986.438: 1.6272% ( 31) 00:11:25.690 9986.438 - 10048.853: 1.9601% ( 36) 00:11:25.690 10048.853 - 10111.269: 2.7182% ( 82) 00:11:25.690 10111.269 - 10173.684: 3.6982% ( 106) 00:11:25.690 10173.684 - 10236.099: 5.0296% ( 144) 00:11:25.690 10236.099 - 10298.514: 6.5921% ( 169) 00:11:25.690 10298.514 - 10360.930: 8.4967% ( 206) 00:11:25.690 10360.930 - 10423.345: 10.6416% ( 232) 00:11:25.690 10423.345 - 10485.760: 13.0178% ( 257) 00:11:25.690 10485.760 - 10548.175: 15.6897% ( 289) 00:11:25.690 10548.175 - 10610.590: 18.5374% ( 308) 00:11:25.690 10610.590 - 10673.006: 21.5607% ( 327) 00:11:25.690 10673.006 - 10735.421: 24.5377% ( 322) 00:11:25.690 10735.421 - 10797.836: 27.6072% ( 332) 00:11:25.690 10797.836 - 
10860.251: 30.6675% ( 331) 00:11:25.690 10860.251 - 10922.667: 33.9405% ( 354) 00:11:25.690 10922.667 - 10985.082: 37.2874% ( 362) 00:11:25.690 10985.082 - 11047.497: 40.7359% ( 373) 00:11:25.690 11047.497 - 11109.912: 44.4064% ( 397) 00:11:25.690 11109.912 - 11172.328: 48.0399% ( 393) 00:11:25.690 11172.328 - 11234.743: 51.6827% ( 394) 00:11:25.690 11234.743 - 11297.158: 55.2422% ( 385) 00:11:25.690 11297.158 - 11359.573: 58.4320% ( 345) 00:11:25.690 11359.573 - 11421.989: 61.3443% ( 315) 00:11:25.690 11421.989 - 11484.404: 63.8314% ( 269) 00:11:25.690 11484.404 - 11546.819: 65.8839% ( 222) 00:11:25.690 11546.819 - 11609.234: 67.7700% ( 204) 00:11:25.690 11609.234 - 11671.650: 69.5636% ( 194) 00:11:25.690 11671.650 - 11734.065: 71.3480% ( 193) 00:11:25.690 11734.065 - 11796.480: 73.0030% ( 179) 00:11:25.690 11796.480 - 11858.895: 74.6487% ( 178) 00:11:25.690 11858.895 - 11921.310: 76.2482% ( 173) 00:11:25.690 11921.310 - 11983.726: 77.6997% ( 157) 00:11:25.690 11983.726 - 12046.141: 79.1143% ( 153) 00:11:25.690 12046.141 - 12108.556: 80.3902% ( 138) 00:11:25.690 12108.556 - 12170.971: 81.5459% ( 125) 00:11:25.690 12170.971 - 12233.387: 82.7108% ( 126) 00:11:25.690 12233.387 - 12295.802: 83.7833% ( 116) 00:11:25.690 12295.802 - 12358.217: 84.7633% ( 106) 00:11:25.690 12358.217 - 12420.632: 85.6879% ( 100) 00:11:25.690 12420.632 - 12483.048: 86.5754% ( 96) 00:11:25.690 12483.048 - 12545.463: 87.3521% ( 84) 00:11:25.690 12545.463 - 12607.878: 88.0917% ( 80) 00:11:25.690 12607.878 - 12670.293: 88.7204% ( 68) 00:11:25.690 12670.293 - 12732.709: 89.2936% ( 62) 00:11:25.690 12732.709 - 12795.124: 89.8484% ( 60) 00:11:25.690 12795.124 - 12857.539: 90.3476% ( 54) 00:11:25.690 12857.539 - 12919.954: 90.8284% ( 52) 00:11:25.690 12919.954 - 12982.370: 91.3092% ( 52) 00:11:25.690 12982.370 - 13044.785: 91.6605% ( 38) 00:11:25.690 13044.785 - 13107.200: 91.9471% ( 31) 00:11:25.690 13107.200 - 13169.615: 92.2522% ( 33) 00:11:25.690 13169.615 - 13232.030: 92.5481% ( 32) 00:11:25.690 13232.030 - 13294.446: 92.8162% ( 29) 00:11:25.690 13294.446 - 13356.861: 93.1305% ( 34) 00:11:25.690 13356.861 - 13419.276: 93.3894% ( 28) 00:11:25.690 13419.276 - 13481.691: 93.6391% ( 27) 00:11:25.690 13481.691 - 13544.107: 93.8609% ( 24) 00:11:25.690 13544.107 - 13606.522: 94.0921% ( 25) 00:11:25.690 13606.522 - 13668.937: 94.3510% ( 28) 00:11:25.690 13668.937 - 13731.352: 94.5729% ( 24) 00:11:25.690 13731.352 - 13793.768: 94.8502% ( 30) 00:11:25.690 13793.768 - 13856.183: 95.0999% ( 27) 00:11:25.690 13856.183 - 13918.598: 95.3217% ( 24) 00:11:25.690 13918.598 - 13981.013: 95.5714% ( 27) 00:11:25.690 13981.013 - 14043.429: 95.7840% ( 23) 00:11:25.690 14043.429 - 14105.844: 95.9320% ( 16) 00:11:25.690 14105.844 - 14168.259: 96.0429% ( 12) 00:11:25.690 14168.259 - 14230.674: 96.0521% ( 1) 00:11:25.690 14230.674 - 14293.090: 96.0984% ( 5) 00:11:25.690 14293.090 - 14355.505: 96.1261% ( 3) 00:11:25.690 14355.505 - 14417.920: 96.1816% ( 6) 00:11:25.690 14417.920 - 14480.335: 96.2463% ( 7) 00:11:25.690 14480.335 - 14542.750: 96.3018% ( 6) 00:11:25.690 14542.750 - 14605.166: 96.3480% ( 5) 00:11:25.690 14605.166 - 14667.581: 96.4312% ( 9) 00:11:25.690 14667.581 - 14729.996: 96.4959% ( 7) 00:11:25.690 14729.996 - 14792.411: 96.5976% ( 11) 00:11:25.690 14792.411 - 14854.827: 96.6993% ( 11) 00:11:25.690 14854.827 - 14917.242: 96.8010% ( 11) 00:11:25.690 14917.242 - 14979.657: 96.9212% ( 13) 00:11:25.690 14979.657 - 15042.072: 97.0322% ( 12) 00:11:25.690 15042.072 - 15104.488: 97.1339% ( 11) 00:11:25.690 15104.488 - 15166.903: 
97.2356% ( 11) 00:11:25.690 15166.903 - 15229.318: 97.3558% ( 13) 00:11:25.690 15229.318 - 15291.733: 97.4575% ( 11) 00:11:25.690 15291.733 - 15354.149: 97.5592% ( 11) 00:11:25.690 15354.149 - 15416.564: 97.6886% ( 14) 00:11:25.690 15416.564 - 15478.979: 97.7811% ( 10) 00:11:25.690 15478.979 - 15541.394: 97.8735% ( 10) 00:11:25.690 15541.394 - 15603.810: 97.9845% ( 12) 00:11:25.690 15603.810 - 15666.225: 98.0862% ( 11) 00:11:25.690 15666.225 - 15728.640: 98.1879% ( 11) 00:11:25.690 15728.640 - 15791.055: 98.2526% ( 7) 00:11:25.690 15791.055 - 15853.470: 98.2988% ( 5) 00:11:25.690 15853.470 - 15915.886: 98.3358% ( 4) 00:11:25.690 15915.886 - 15978.301: 98.3913% ( 6) 00:11:25.690 15978.301 - 16103.131: 98.4837% ( 10) 00:11:25.690 16103.131 - 16227.962: 98.5669% ( 9) 00:11:25.690 16227.962 - 16352.792: 98.6224% ( 6) 00:11:25.690 16352.792 - 16477.623: 98.6779% ( 6) 00:11:25.690 16477.623 - 16602.453: 98.7426% ( 7) 00:11:25.690 16602.453 - 16727.284: 98.8073% ( 7) 00:11:25.690 16727.284 - 16852.114: 98.8166% ( 1) 00:11:25.690 41693.379 - 41943.040: 98.8628% ( 5) 00:11:25.690 41943.040 - 42192.701: 98.9090% ( 5) 00:11:25.690 42192.701 - 42442.362: 98.9553% ( 5) 00:11:25.690 42442.362 - 42692.023: 99.0015% ( 5) 00:11:25.690 42692.023 - 42941.684: 99.0570% ( 6) 00:11:25.690 42941.684 - 43191.345: 99.1032% ( 5) 00:11:25.690 43191.345 - 43441.006: 99.1494% ( 5) 00:11:25.690 43441.006 - 43690.667: 99.2049% ( 6) 00:11:25.690 43690.667 - 43940.328: 99.2511% ( 5) 00:11:25.690 43940.328 - 44189.989: 99.2973% ( 5) 00:11:25.690 44189.989 - 44439.650: 99.3528% ( 6) 00:11:25.690 44439.650 - 44689.310: 99.3990% ( 5) 00:11:25.690 44689.310 - 44938.971: 99.4083% ( 1) 00:11:25.690 49183.208 - 49432.869: 99.4360% ( 3) 00:11:25.690 49432.869 - 49682.530: 99.4822% ( 5) 00:11:25.690 49682.530 - 49932.190: 99.5470% ( 7) 00:11:25.690 49932.190 - 50181.851: 99.5932% ( 5) 00:11:25.690 50181.851 - 50431.512: 99.6394% ( 5) 00:11:25.690 50431.512 - 50681.173: 99.6949% ( 6) 00:11:25.690 50681.173 - 50930.834: 99.7504% ( 6) 00:11:25.690 50930.834 - 51180.495: 99.7966% ( 5) 00:11:25.690 51180.495 - 51430.156: 99.8521% ( 6) 00:11:25.690 51430.156 - 51679.817: 99.9075% ( 6) 00:11:25.690 51679.817 - 51929.478: 99.9630% ( 6) 00:11:25.690 51929.478 - 52179.139: 100.0000% ( 4) 00:11:25.690 00:11:25.690 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:25.690 ============================================================================== 00:11:25.690 Range in us Cumulative IO count 00:11:25.690 9362.286 - 9424.701: 0.0740% ( 8) 00:11:25.690 9424.701 - 9487.116: 0.1479% ( 8) 00:11:25.690 9487.116 - 9549.531: 0.2404% ( 10) 00:11:25.690 9549.531 - 9611.947: 0.3328% ( 10) 00:11:25.690 9611.947 - 9674.362: 0.4808% ( 16) 00:11:25.690 9674.362 - 9736.777: 0.6472% ( 18) 00:11:25.690 9736.777 - 9799.192: 0.7951% ( 16) 00:11:25.690 9799.192 - 9861.608: 0.9523% ( 17) 00:11:25.690 9861.608 - 9924.023: 1.1372% ( 20) 00:11:25.690 9924.023 - 9986.438: 1.3591% ( 24) 00:11:25.690 9986.438 - 10048.853: 1.7104% ( 38) 00:11:25.690 10048.853 - 10111.269: 2.3114% ( 65) 00:11:25.690 10111.269 - 10173.684: 3.3007% ( 107) 00:11:25.690 10173.684 - 10236.099: 4.6875% ( 150) 00:11:25.690 10236.099 - 10298.514: 6.3425% ( 179) 00:11:25.690 10298.514 - 10360.930: 8.2933% ( 211) 00:11:25.690 10360.930 - 10423.345: 10.3828% ( 226) 00:11:25.690 10423.345 - 10485.760: 12.7404% ( 255) 00:11:25.690 10485.760 - 10548.175: 15.4678% ( 295) 00:11:25.690 10548.175 - 10610.590: 18.2970% ( 306) 00:11:25.690 10610.590 - 10673.006: 21.2463% ( 319) 00:11:25.690 
10673.006 - 10735.421: 24.2326% ( 323) 00:11:25.690 10735.421 - 10797.836: 27.3391% ( 336) 00:11:25.690 10797.836 - 10860.251: 30.5566% ( 348) 00:11:25.690 10860.251 - 10922.667: 33.9405% ( 366) 00:11:25.690 10922.667 - 10985.082: 37.3336% ( 367) 00:11:25.690 10985.082 - 11047.497: 40.8376% ( 379) 00:11:25.690 11047.497 - 11109.912: 44.3510% ( 380) 00:11:25.690 11109.912 - 11172.328: 47.8920% ( 383) 00:11:25.690 11172.328 - 11234.743: 51.6087% ( 402) 00:11:25.690 11234.743 - 11297.158: 55.2422% ( 393) 00:11:25.690 11297.158 - 11359.573: 58.4412% ( 346) 00:11:25.691 11359.573 - 11421.989: 61.2796% ( 307) 00:11:25.691 11421.989 - 11484.404: 63.7851% ( 271) 00:11:25.691 11484.404 - 11546.819: 65.8469% ( 223) 00:11:25.691 11546.819 - 11609.234: 67.7515% ( 206) 00:11:25.691 11609.234 - 11671.650: 69.6930% ( 210) 00:11:25.691 11671.650 - 11734.065: 71.5607% ( 202) 00:11:25.691 11734.065 - 11796.480: 73.2711% ( 185) 00:11:25.691 11796.480 - 11858.895: 75.0185% ( 189) 00:11:25.691 11858.895 - 11921.310: 76.5163% ( 162) 00:11:25.691 11921.310 - 11983.726: 77.9771% ( 158) 00:11:25.691 11983.726 - 12046.141: 79.4564% ( 160) 00:11:25.691 12046.141 - 12108.556: 80.8987% ( 156) 00:11:25.691 12108.556 - 12170.971: 82.2947% ( 151) 00:11:25.691 12170.971 - 12233.387: 83.5152% ( 132) 00:11:25.691 12233.387 - 12295.802: 84.5784% ( 115) 00:11:25.691 12295.802 - 12358.217: 85.5492% ( 105) 00:11:25.691 12358.217 - 12420.632: 86.4368% ( 96) 00:11:25.691 12420.632 - 12483.048: 87.1857% ( 81) 00:11:25.691 12483.048 - 12545.463: 87.8421% ( 71) 00:11:25.691 12545.463 - 12607.878: 88.4893% ( 70) 00:11:25.691 12607.878 - 12670.293: 89.0717% ( 63) 00:11:25.691 12670.293 - 12732.709: 89.5987% ( 57) 00:11:25.691 12732.709 - 12795.124: 90.1165% ( 56) 00:11:25.691 12795.124 - 12857.539: 90.5325% ( 45) 00:11:25.691 12857.539 - 12919.954: 90.9393% ( 44) 00:11:25.691 12919.954 - 12982.370: 91.3277% ( 42) 00:11:25.691 12982.370 - 13044.785: 91.6513% ( 35) 00:11:25.691 13044.785 - 13107.200: 91.9656% ( 34) 00:11:25.691 13107.200 - 13169.615: 92.2060% ( 26) 00:11:25.691 13169.615 - 13232.030: 92.4464% ( 26) 00:11:25.691 13232.030 - 13294.446: 92.6683% ( 24) 00:11:25.691 13294.446 - 13356.861: 92.8347% ( 18) 00:11:25.691 13356.861 - 13419.276: 93.0011% ( 18) 00:11:25.691 13419.276 - 13481.691: 93.2415% ( 26) 00:11:25.691 13481.691 - 13544.107: 93.4172% ( 19) 00:11:25.691 13544.107 - 13606.522: 93.6298% ( 23) 00:11:25.691 13606.522 - 13668.937: 93.8332% ( 22) 00:11:25.691 13668.937 - 13731.352: 94.0736% ( 26) 00:11:25.691 13731.352 - 13793.768: 94.2955% ( 24) 00:11:25.691 13793.768 - 13856.183: 94.4619% ( 18) 00:11:25.691 13856.183 - 13918.598: 94.6376% ( 19) 00:11:25.691 13918.598 - 13981.013: 94.8595% ( 24) 00:11:25.691 13981.013 - 14043.429: 95.0166% ( 17) 00:11:25.691 14043.429 - 14105.844: 95.1831% ( 18) 00:11:25.691 14105.844 - 14168.259: 95.3125% ( 14) 00:11:25.691 14168.259 - 14230.674: 95.4512% ( 15) 00:11:25.691 14230.674 - 14293.090: 95.5806% ( 14) 00:11:25.691 14293.090 - 14355.505: 95.7193% ( 15) 00:11:25.691 14355.505 - 14417.920: 95.8672% ( 16) 00:11:25.691 14417.920 - 14480.335: 96.0337% ( 18) 00:11:25.691 14480.335 - 14542.750: 96.2093% ( 19) 00:11:25.691 14542.750 - 14605.166: 96.3572% ( 16) 00:11:25.691 14605.166 - 14667.581: 96.5237% ( 18) 00:11:25.691 14667.581 - 14729.996: 96.6531% ( 14) 00:11:25.691 14729.996 - 14792.411: 96.8010% ( 16) 00:11:25.691 14792.411 - 14854.827: 96.9582% ( 17) 00:11:25.691 14854.827 - 14917.242: 97.0876% ( 14) 00:11:25.691 14917.242 - 14979.657: 97.2078% ( 13) 00:11:25.691 
14979.657 - 15042.072: 97.3558% ( 16) 00:11:25.691 15042.072 - 15104.488: 97.4945% ( 15) 00:11:25.691 15104.488 - 15166.903: 97.6424% ( 16) 00:11:25.691 15166.903 - 15229.318: 97.7626% ( 13) 00:11:25.691 15229.318 - 15291.733: 97.8735% ( 12) 00:11:25.691 15291.733 - 15354.149: 97.9475% ( 8) 00:11:25.691 15354.149 - 15416.564: 97.9752% ( 3) 00:11:25.691 15416.564 - 15478.979: 98.0214% ( 5) 00:11:25.691 15478.979 - 15541.394: 98.0492% ( 3) 00:11:25.691 15541.394 - 15603.810: 98.0862% ( 4) 00:11:25.691 15603.810 - 15666.225: 98.1232% ( 4) 00:11:25.691 15666.225 - 15728.640: 98.1786% ( 6) 00:11:25.691 15728.640 - 15791.055: 98.2341% ( 6) 00:11:25.691 15791.055 - 15853.470: 98.2896% ( 6) 00:11:25.691 15853.470 - 15915.886: 98.3358% ( 5) 00:11:25.691 15915.886 - 15978.301: 98.3820% ( 5) 00:11:25.691 15978.301 - 16103.131: 98.4745% ( 10) 00:11:25.691 16103.131 - 16227.962: 98.5484% ( 8) 00:11:25.691 16227.962 - 16352.792: 98.6132% ( 7) 00:11:25.691 16352.792 - 16477.623: 98.6779% ( 7) 00:11:25.691 16477.623 - 16602.453: 98.7334% ( 6) 00:11:25.691 16602.453 - 16727.284: 98.7981% ( 7) 00:11:25.691 16727.284 - 16852.114: 98.8166% ( 2) 00:11:25.691 38447.787 - 38697.448: 98.8443% ( 3) 00:11:25.691 38697.448 - 38947.109: 98.8998% ( 6) 00:11:25.691 38947.109 - 39196.770: 98.9460% ( 5) 00:11:25.691 39196.770 - 39446.430: 99.0015% ( 6) 00:11:25.691 39446.430 - 39696.091: 99.0570% ( 6) 00:11:25.691 39696.091 - 39945.752: 99.1124% ( 6) 00:11:25.691 39945.752 - 40195.413: 99.1679% ( 6) 00:11:25.691 40195.413 - 40445.074: 99.2141% ( 5) 00:11:25.691 40445.074 - 40694.735: 99.2696% ( 6) 00:11:25.691 40694.735 - 40944.396: 99.3158% ( 5) 00:11:25.691 40944.396 - 41194.057: 99.3713% ( 6) 00:11:25.691 41194.057 - 41443.718: 99.4083% ( 4) 00:11:25.691 46187.276 - 46436.937: 99.4453% ( 4) 00:11:25.691 46436.937 - 46686.598: 99.5007% ( 6) 00:11:25.691 46686.598 - 46936.259: 99.5562% ( 6) 00:11:25.691 46936.259 - 47185.920: 99.6024% ( 5) 00:11:25.691 47185.920 - 47435.581: 99.6487% ( 5) 00:11:25.691 47435.581 - 47685.242: 99.7041% ( 6) 00:11:25.691 47685.242 - 47934.903: 99.7504% ( 5) 00:11:25.691 47934.903 - 48184.564: 99.8058% ( 6) 00:11:25.691 48184.564 - 48434.225: 99.8613% ( 6) 00:11:25.691 48434.225 - 48683.886: 99.9075% ( 5) 00:11:25.691 48683.886 - 48933.547: 99.9630% ( 6) 00:11:25.691 48933.547 - 49183.208: 100.0000% ( 4) 00:11:25.691 00:11:25.691 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:25.691 ============================================================================== 00:11:25.691 Range in us Cumulative IO count 00:11:25.691 9237.455 - 9299.870: 0.0185% ( 2) 00:11:25.691 9299.870 - 9362.286: 0.0462% ( 3) 00:11:25.691 9362.286 - 9424.701: 0.0925% ( 5) 00:11:25.691 9424.701 - 9487.116: 0.1664% ( 8) 00:11:25.691 9487.116 - 9549.531: 0.2589% ( 10) 00:11:25.691 9549.531 - 9611.947: 0.3513% ( 10) 00:11:25.691 9611.947 - 9674.362: 0.4715% ( 13) 00:11:25.691 9674.362 - 9736.777: 0.6287% ( 17) 00:11:25.691 9736.777 - 9799.192: 0.7951% ( 18) 00:11:25.691 9799.192 - 9861.608: 0.9615% ( 18) 00:11:25.691 9861.608 - 9924.023: 1.1372% ( 19) 00:11:25.691 9924.023 - 9986.438: 1.4053% ( 29) 00:11:25.691 9986.438 - 10048.853: 1.7474% ( 37) 00:11:25.691 10048.853 - 10111.269: 2.3946% ( 70) 00:11:25.691 10111.269 - 10173.684: 3.3007% ( 98) 00:11:25.691 10173.684 - 10236.099: 4.5488% ( 135) 00:11:25.691 10236.099 - 10298.514: 6.4349% ( 204) 00:11:25.691 10298.514 - 10360.930: 8.4042% ( 213) 00:11:25.691 10360.930 - 10423.345: 10.5954% ( 237) 00:11:25.691 10423.345 - 10485.760: 13.1010% ( 271) 
00:11:25.691 10485.760 - 10548.175: 15.7082% ( 282) 00:11:25.691 10548.175 - 10610.590: 18.5651% ( 309) 00:11:25.691 10610.590 - 10673.006: 21.4774% ( 315) 00:11:25.691 10673.006 - 10735.421: 24.4083% ( 317) 00:11:25.691 10735.421 - 10797.836: 27.4686% ( 331) 00:11:25.691 10797.836 - 10860.251: 30.6490% ( 344) 00:11:25.691 10860.251 - 10922.667: 33.9405% ( 356) 00:11:25.691 10922.667 - 10985.082: 37.3243% ( 366) 00:11:25.691 10985.082 - 11047.497: 40.8192% ( 378) 00:11:25.691 11047.497 - 11109.912: 44.3602% ( 383) 00:11:25.691 11109.912 - 11172.328: 48.0030% ( 394) 00:11:25.691 11172.328 - 11234.743: 51.6272% ( 392) 00:11:25.691 11234.743 - 11297.158: 55.0943% ( 375) 00:11:25.691 11297.158 - 11359.573: 58.2933% ( 346) 00:11:25.691 11359.573 - 11421.989: 61.1409% ( 308) 00:11:25.691 11421.989 - 11484.404: 63.6002% ( 266) 00:11:25.691 11484.404 - 11546.819: 65.5695% ( 213) 00:11:25.691 11546.819 - 11609.234: 67.3354% ( 191) 00:11:25.691 11609.234 - 11671.650: 69.1753% ( 199) 00:11:25.691 11671.650 - 11734.065: 71.0891% ( 207) 00:11:25.691 11734.065 - 11796.480: 72.8550% ( 191) 00:11:25.691 11796.480 - 11858.895: 74.5839% ( 187) 00:11:25.691 11858.895 - 11921.310: 76.2759% ( 183) 00:11:25.691 11921.310 - 11983.726: 77.7922% ( 164) 00:11:25.691 11983.726 - 12046.141: 79.2530% ( 158) 00:11:25.691 12046.141 - 12108.556: 80.5658% ( 142) 00:11:25.691 12108.556 - 12170.971: 81.8879% ( 143) 00:11:25.691 12170.971 - 12233.387: 83.2008% ( 142) 00:11:25.691 12233.387 - 12295.802: 84.3288% ( 122) 00:11:25.691 12295.802 - 12358.217: 85.3365% ( 109) 00:11:25.691 12358.217 - 12420.632: 86.2426% ( 98) 00:11:25.691 12420.632 - 12483.048: 87.0747% ( 90) 00:11:25.691 12483.048 - 12545.463: 87.8606% ( 85) 00:11:25.691 12545.463 - 12607.878: 88.6095% ( 81) 00:11:25.691 12607.878 - 12670.293: 89.2659% ( 71) 00:11:25.691 12670.293 - 12732.709: 89.8484% ( 63) 00:11:25.691 12732.709 - 12795.124: 90.3476% ( 54) 00:11:25.691 12795.124 - 12857.539: 90.7729% ( 46) 00:11:25.691 12857.539 - 12919.954: 91.1705% ( 43) 00:11:25.691 12919.954 - 12982.370: 91.4848% ( 34) 00:11:25.691 12982.370 - 13044.785: 91.7899% ( 33) 00:11:25.691 13044.785 - 13107.200: 92.0673% ( 30) 00:11:25.691 13107.200 - 13169.615: 92.2892% ( 24) 00:11:25.691 13169.615 - 13232.030: 92.5111% ( 24) 00:11:25.691 13232.030 - 13294.446: 92.7330% ( 24) 00:11:25.691 13294.446 - 13356.861: 92.9364% ( 22) 00:11:25.691 13356.861 - 13419.276: 93.1213% ( 20) 00:11:25.691 13419.276 - 13481.691: 93.3247% ( 22) 00:11:25.691 13481.691 - 13544.107: 93.4541% ( 14) 00:11:25.691 13544.107 - 13606.522: 93.5928% ( 15) 00:11:25.691 13606.522 - 13668.937: 93.7592% ( 18) 00:11:25.691 13668.937 - 13731.352: 93.8794% ( 13) 00:11:25.691 13731.352 - 13793.768: 94.0551% ( 19) 00:11:25.691 13793.768 - 13856.183: 94.2123% ( 17) 00:11:25.691 13856.183 - 13918.598: 94.4249% ( 23) 00:11:25.691 13918.598 - 13981.013: 94.6376% ( 23) 00:11:25.691 13981.013 - 14043.429: 94.8410% ( 22) 00:11:25.691 14043.429 - 14105.844: 95.0074% ( 18) 00:11:25.691 14105.844 - 14168.259: 95.1368% ( 14) 00:11:25.691 14168.259 - 14230.674: 95.2755% ( 15) 00:11:25.691 14230.674 - 14293.090: 95.4327% ( 17) 00:11:25.691 14293.090 - 14355.505: 95.5436% ( 12) 00:11:25.691 14355.505 - 14417.920: 95.6546% ( 12) 00:11:25.691 14417.920 - 14480.335: 95.8303% ( 19) 00:11:25.691 14480.335 - 14542.750: 95.9782% ( 16) 00:11:25.691 14542.750 - 14605.166: 96.1076% ( 14) 00:11:25.691 14605.166 - 14667.581: 96.2740% ( 18) 00:11:25.691 14667.581 - 14729.996: 96.4127% ( 15) 00:11:25.691 14729.996 - 14792.411: 96.5976% ( 20) 
00:11:25.691 14792.411 - 14854.827: 96.7548% ( 17) 00:11:25.691 14854.827 - 14917.242: 96.9305% ( 19) 00:11:25.691 14917.242 - 14979.657: 97.0507% ( 13) 00:11:25.692 14979.657 - 15042.072: 97.1709% ( 13) 00:11:25.692 15042.072 - 15104.488: 97.2541% ( 9) 00:11:25.692 15104.488 - 15166.903: 97.3465% ( 10) 00:11:25.692 15166.903 - 15229.318: 97.4205% ( 8) 00:11:25.692 15229.318 - 15291.733: 97.4852% ( 7) 00:11:25.692 15291.733 - 15354.149: 97.5592% ( 8) 00:11:25.692 15354.149 - 15416.564: 97.6331% ( 8) 00:11:25.692 15416.564 - 15478.979: 97.7071% ( 8) 00:11:25.692 15478.979 - 15541.394: 97.7441% ( 4) 00:11:25.692 15541.394 - 15603.810: 97.7903% ( 5) 00:11:25.692 15603.810 - 15666.225: 97.8643% ( 8) 00:11:25.692 15666.225 - 15728.640: 97.9197% ( 6) 00:11:25.692 15728.640 - 15791.055: 98.0030% ( 9) 00:11:25.692 15791.055 - 15853.470: 98.0769% ( 8) 00:11:25.692 15853.470 - 15915.886: 98.1416% ( 7) 00:11:25.692 15915.886 - 15978.301: 98.1971% ( 6) 00:11:25.692 15978.301 - 16103.131: 98.3081% ( 12) 00:11:25.692 16103.131 - 16227.962: 98.3913% ( 9) 00:11:25.692 16227.962 - 16352.792: 98.4837% ( 10) 00:11:25.692 16352.792 - 16477.623: 98.5762% ( 10) 00:11:25.692 16477.623 - 16602.453: 98.6686% ( 10) 00:11:25.692 16602.453 - 16727.284: 98.7611% ( 10) 00:11:25.692 16727.284 - 16852.114: 98.8166% ( 6) 00:11:25.692 34702.872 - 34952.533: 98.8443% ( 3) 00:11:25.692 34952.533 - 35202.194: 98.8998% ( 6) 00:11:25.692 35202.194 - 35451.855: 98.9553% ( 6) 00:11:25.692 35451.855 - 35701.516: 99.0015% ( 5) 00:11:25.692 35701.516 - 35951.177: 99.0662% ( 7) 00:11:25.692 35951.177 - 36200.838: 99.1217% ( 6) 00:11:25.692 36200.838 - 36450.499: 99.1679% ( 5) 00:11:25.692 36450.499 - 36700.160: 99.2141% ( 5) 00:11:25.692 36700.160 - 36949.821: 99.2696% ( 6) 00:11:25.692 36949.821 - 37199.482: 99.3158% ( 5) 00:11:25.692 37199.482 - 37449.143: 99.3713% ( 6) 00:11:25.692 37449.143 - 37698.804: 99.4083% ( 4) 00:11:25.692 42442.362 - 42692.023: 99.4360% ( 3) 00:11:25.692 42692.023 - 42941.684: 99.4915% ( 6) 00:11:25.692 42941.684 - 43191.345: 99.5377% ( 5) 00:11:25.692 43191.345 - 43441.006: 99.6024% ( 7) 00:11:25.692 43441.006 - 43690.667: 99.6487% ( 5) 00:11:25.692 43690.667 - 43940.328: 99.6949% ( 5) 00:11:25.692 43940.328 - 44189.989: 99.7504% ( 6) 00:11:25.692 44189.989 - 44439.650: 99.7966% ( 5) 00:11:25.692 44439.650 - 44689.310: 99.8521% ( 6) 00:11:25.692 44689.310 - 44938.971: 99.9075% ( 6) 00:11:25.692 44938.971 - 45188.632: 99.9445% ( 4) 00:11:25.692 45188.632 - 45438.293: 100.0000% ( 6) 00:11:25.692 00:11:25.692 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:25.692 ============================================================================== 00:11:25.692 Range in us Cumulative IO count 00:11:25.692 9237.455 - 9299.870: 0.0460% ( 5) 00:11:25.692 9299.870 - 9362.286: 0.0827% ( 4) 00:11:25.692 9362.286 - 9424.701: 0.1287% ( 5) 00:11:25.692 9424.701 - 9487.116: 0.1838% ( 6) 00:11:25.692 9487.116 - 9549.531: 0.2574% ( 8) 00:11:25.692 9549.531 - 9611.947: 0.3768% ( 13) 00:11:25.692 9611.947 - 9674.362: 0.5147% ( 15) 00:11:25.692 9674.362 - 9736.777: 0.6158% ( 11) 00:11:25.692 9736.777 - 9799.192: 0.7721% ( 17) 00:11:25.692 9799.192 - 9861.608: 0.9375% ( 18) 00:11:25.692 9861.608 - 9924.023: 1.1029% ( 18) 00:11:25.692 9924.023 - 9986.438: 1.4338% ( 36) 00:11:25.692 9986.438 - 10048.853: 1.8199% ( 42) 00:11:25.692 10048.853 - 10111.269: 2.4081% ( 64) 00:11:25.692 10111.269 - 10173.684: 3.4007% ( 108) 00:11:25.692 10173.684 - 10236.099: 4.5680% ( 127) 00:11:25.692 10236.099 - 10298.514: 6.2776% ( 
186) 00:11:25.692 10298.514 - 10360.930: 8.2169% ( 211) 00:11:25.692 10360.930 - 10423.345: 10.4320% ( 241) 00:11:25.692 10423.345 - 10485.760: 12.9136% ( 270) 00:11:25.692 10485.760 - 10548.175: 15.6434% ( 297) 00:11:25.692 10548.175 - 10610.590: 18.4191% ( 302) 00:11:25.692 10610.590 - 10673.006: 21.3971% ( 324) 00:11:25.692 10673.006 - 10735.421: 24.4853% ( 336) 00:11:25.692 10735.421 - 10797.836: 27.5827% ( 337) 00:11:25.692 10797.836 - 10860.251: 30.7721% ( 347) 00:11:25.692 10860.251 - 10922.667: 34.0074% ( 352) 00:11:25.692 10922.667 - 10985.082: 37.3989% ( 369) 00:11:25.692 10985.082 - 11047.497: 40.8088% ( 371) 00:11:25.692 11047.497 - 11109.912: 44.4026% ( 391) 00:11:25.692 11109.912 - 11172.328: 47.9412% ( 385) 00:11:25.692 11172.328 - 11234.743: 51.5074% ( 388) 00:11:25.692 11234.743 - 11297.158: 54.9540% ( 375) 00:11:25.692 11297.158 - 11359.573: 58.1893% ( 352) 00:11:25.692 11359.573 - 11421.989: 61.0754% ( 314) 00:11:25.692 11421.989 - 11484.404: 63.4191% ( 255) 00:11:25.692 11484.404 - 11546.819: 65.3493% ( 210) 00:11:25.692 11546.819 - 11609.234: 67.1415% ( 195) 00:11:25.692 11609.234 - 11671.650: 68.9246% ( 194) 00:11:25.692 11671.650 - 11734.065: 70.6710% ( 190) 00:11:25.692 11734.065 - 11796.480: 72.5551% ( 205) 00:11:25.692 11796.480 - 11858.895: 74.3290% ( 193) 00:11:25.692 11858.895 - 11921.310: 76.0478% ( 187) 00:11:25.692 11921.310 - 11983.726: 77.5919% ( 168) 00:11:25.692 11983.726 - 12046.141: 79.1085% ( 165) 00:11:25.692 12046.141 - 12108.556: 80.5607% ( 158) 00:11:25.692 12108.556 - 12170.971: 81.9026% ( 146) 00:11:25.692 12170.971 - 12233.387: 83.1526% ( 136) 00:11:25.692 12233.387 - 12295.802: 84.2555% ( 120) 00:11:25.692 12295.802 - 12358.217: 85.1838% ( 101) 00:11:25.692 12358.217 - 12420.632: 86.0846% ( 98) 00:11:25.692 12420.632 - 12483.048: 86.9761% ( 97) 00:11:25.692 12483.048 - 12545.463: 87.8401% ( 94) 00:11:25.692 12545.463 - 12607.878: 88.6121% ( 84) 00:11:25.692 12607.878 - 12670.293: 89.3199% ( 77) 00:11:25.692 12670.293 - 12732.709: 89.9081% ( 64) 00:11:25.692 12732.709 - 12795.124: 90.4228% ( 56) 00:11:25.692 12795.124 - 12857.539: 90.8548% ( 47) 00:11:25.692 12857.539 - 12919.954: 91.2868% ( 47) 00:11:25.692 12919.954 - 12982.370: 91.7096% ( 46) 00:11:25.692 12982.370 - 13044.785: 92.0404% ( 36) 00:11:25.692 13044.785 - 13107.200: 92.2610% ( 24) 00:11:25.692 13107.200 - 13169.615: 92.4540% ( 21) 00:11:25.692 13169.615 - 13232.030: 92.6654% ( 23) 00:11:25.692 13232.030 - 13294.446: 92.8493% ( 20) 00:11:25.692 13294.446 - 13356.861: 93.0055% ( 17) 00:11:25.692 13356.861 - 13419.276: 93.1801% ( 19) 00:11:25.692 13419.276 - 13481.691: 93.3364% ( 17) 00:11:25.692 13481.691 - 13544.107: 93.5294% ( 21) 00:11:25.692 13544.107 - 13606.522: 93.7224% ( 21) 00:11:25.692 13606.522 - 13668.937: 93.9246% ( 22) 00:11:25.692 13668.937 - 13731.352: 94.1268% ( 22) 00:11:25.692 13731.352 - 13793.768: 94.3107% ( 20) 00:11:25.692 13793.768 - 13856.183: 94.5037% ( 21) 00:11:25.692 13856.183 - 13918.598: 94.6967% ( 21) 00:11:25.692 13918.598 - 13981.013: 94.8438% ( 16) 00:11:25.692 13981.013 - 14043.429: 94.9816% ( 15) 00:11:25.692 14043.429 - 14105.844: 95.1195% ( 15) 00:11:25.692 14105.844 - 14168.259: 95.2665% ( 16) 00:11:25.692 14168.259 - 14230.674: 95.4228% ( 17) 00:11:25.692 14230.674 - 14293.090: 95.5699% ( 16) 00:11:25.692 14293.090 - 14355.505: 95.6893% ( 13) 00:11:25.692 14355.505 - 14417.920: 95.8180% ( 14) 00:11:25.692 14417.920 - 14480.335: 95.9191% ( 11) 00:11:25.692 14480.335 - 14542.750: 96.0386% ( 13) 00:11:25.692 14542.750 - 14605.166: 96.1397% ( 
11) 00:11:25.692 14605.166 - 14667.581: 96.2592% ( 13) 00:11:25.692 14667.581 - 14729.996: 96.3787% ( 13) 00:11:25.692 14729.996 - 14792.411: 96.4890% ( 12) 00:11:25.692 14792.411 - 14854.827: 96.5993% ( 12) 00:11:25.692 14854.827 - 14917.242: 96.6820% ( 9) 00:11:25.692 14917.242 - 14979.657: 96.7647% ( 9) 00:11:25.692 14979.657 - 15042.072: 96.8474% ( 9) 00:11:25.692 15042.072 - 15104.488: 96.9485% ( 11) 00:11:25.692 15104.488 - 15166.903: 97.0129% ( 7) 00:11:25.692 15166.903 - 15229.318: 97.0956% ( 9) 00:11:25.692 15229.318 - 15291.733: 97.1875% ( 10) 00:11:25.692 15291.733 - 15354.149: 97.2518% ( 7) 00:11:25.692 15354.149 - 15416.564: 97.3162% ( 7) 00:11:25.692 15416.564 - 15478.979: 97.3897% ( 8) 00:11:25.692 15478.979 - 15541.394: 97.4265% ( 4) 00:11:25.692 15541.394 - 15603.810: 97.4724% ( 5) 00:11:25.692 15603.810 - 15666.225: 97.5184% ( 5) 00:11:25.692 15666.225 - 15728.640: 97.5827% ( 7) 00:11:25.692 15728.640 - 15791.055: 97.6562% ( 8) 00:11:25.692 15791.055 - 15853.470: 97.7206% ( 7) 00:11:25.692 15853.470 - 15915.886: 97.7849% ( 7) 00:11:25.692 15915.886 - 15978.301: 97.8585% ( 8) 00:11:25.692 15978.301 - 16103.131: 97.9963% ( 15) 00:11:25.692 16103.131 - 16227.962: 98.1526% ( 17) 00:11:25.692 16227.962 - 16352.792: 98.2904% ( 15) 00:11:25.692 16352.792 - 16477.623: 98.3824% ( 10) 00:11:25.692 16477.623 - 16602.453: 98.4926% ( 12) 00:11:25.692 16602.453 - 16727.284: 98.5846% ( 10) 00:11:25.692 16727.284 - 16852.114: 98.6857% ( 11) 00:11:25.692 16852.114 - 16976.945: 98.7316% ( 5) 00:11:25.692 16976.945 - 17101.775: 98.7776% ( 5) 00:11:25.692 17101.775 - 17226.606: 98.8143% ( 4) 00:11:25.692 17226.606 - 17351.436: 98.8235% ( 1) 00:11:25.692 24966.095 - 25090.926: 98.8327% ( 1) 00:11:25.692 25090.926 - 25215.756: 98.8511% ( 2) 00:11:25.692 25215.756 - 25340.587: 98.8787% ( 3) 00:11:25.692 25340.587 - 25465.417: 98.9062% ( 3) 00:11:25.692 25465.417 - 25590.248: 98.9246% ( 2) 00:11:25.692 25590.248 - 25715.078: 98.9522% ( 3) 00:11:25.692 25715.078 - 25839.909: 98.9706% ( 2) 00:11:25.692 25839.909 - 25964.739: 98.9982% ( 3) 00:11:25.692 25964.739 - 26089.570: 99.0165% ( 2) 00:11:25.692 26089.570 - 26214.400: 99.0349% ( 2) 00:11:25.692 26214.400 - 26339.230: 99.0625% ( 3) 00:11:25.692 26339.230 - 26464.061: 99.0901% ( 3) 00:11:25.692 26464.061 - 26588.891: 99.1085% ( 2) 00:11:25.692 26588.891 - 26713.722: 99.1268% ( 2) 00:11:25.692 26713.722 - 26838.552: 99.1544% ( 3) 00:11:25.692 26838.552 - 26963.383: 99.1820% ( 3) 00:11:25.692 26963.383 - 27088.213: 99.2096% ( 3) 00:11:25.692 27088.213 - 27213.044: 99.2371% ( 3) 00:11:25.692 27213.044 - 27337.874: 99.2555% ( 2) 00:11:25.692 27337.874 - 27462.705: 99.2831% ( 3) 00:11:25.692 27462.705 - 27587.535: 99.3015% ( 2) 00:11:25.692 27587.535 - 27712.366: 99.3290% ( 3) 00:11:25.692 27712.366 - 27837.196: 99.3566% ( 3) 00:11:25.692 27837.196 - 27962.027: 99.3750% ( 2) 00:11:25.692 27962.027 - 28086.857: 99.4026% ( 3) 00:11:25.692 28086.857 - 28211.688: 99.4118% ( 1) 00:11:25.692 33204.907 - 33454.568: 99.4485% ( 4) 00:11:25.692 33454.568 - 33704.229: 99.4945% ( 5) 00:11:25.692 33704.229 - 33953.890: 99.5496% ( 6) 00:11:25.692 33953.890 - 34203.550: 99.5772% ( 3) 00:11:25.692 34203.550 - 34453.211: 99.6415% ( 7) 00:11:25.692 34453.211 - 34702.872: 99.6875% ( 5) 00:11:25.693 34702.872 - 34952.533: 99.7243% ( 4) 00:11:25.693 34952.533 - 35202.194: 99.7702% ( 5) 00:11:25.693 35202.194 - 35451.855: 99.8254% ( 6) 00:11:25.693 35451.855 - 35701.516: 99.8805% ( 6) 00:11:25.693 35701.516 - 35951.177: 99.9265% ( 5) 00:11:25.693 35951.177 - 36200.838: 
99.9816% ( 6) 00:11:25.693 36200.838 - 36450.499: 100.0000% ( 2) 00:11:25.693 00:11:25.693 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:25.693 ============================================================================== 00:11:25.693 Range in us Cumulative IO count 00:11:25.693 9237.455 - 9299.870: 0.0368% ( 4) 00:11:25.693 9299.870 - 9362.286: 0.0827% ( 5) 00:11:25.693 9362.286 - 9424.701: 0.1195% ( 4) 00:11:25.693 9424.701 - 9487.116: 0.1838% ( 7) 00:11:25.693 9487.116 - 9549.531: 0.2941% ( 12) 00:11:25.693 9549.531 - 9611.947: 0.4228% ( 14) 00:11:25.693 9611.947 - 9674.362: 0.5239% ( 11) 00:11:25.693 9674.362 - 9736.777: 0.6526% ( 14) 00:11:25.693 9736.777 - 9799.192: 0.7996% ( 16) 00:11:25.693 9799.192 - 9861.608: 0.9743% ( 19) 00:11:25.693 9861.608 - 9924.023: 1.1949% ( 24) 00:11:25.693 9924.023 - 9986.438: 1.5074% ( 34) 00:11:25.693 9986.438 - 10048.853: 1.9026% ( 43) 00:11:25.693 10048.853 - 10111.269: 2.5827% ( 74) 00:11:25.693 10111.269 - 10173.684: 3.5478% ( 105) 00:11:25.693 10173.684 - 10236.099: 4.7059% ( 126) 00:11:25.693 10236.099 - 10298.514: 6.3879% ( 183) 00:11:25.693 10298.514 - 10360.930: 8.2996% ( 208) 00:11:25.693 10360.930 - 10423.345: 10.6710% ( 258) 00:11:25.693 10423.345 - 10485.760: 13.1342% ( 268) 00:11:25.693 10485.760 - 10548.175: 15.8088% ( 291) 00:11:25.693 10548.175 - 10610.590: 18.5754% ( 301) 00:11:25.693 10610.590 - 10673.006: 21.4246% ( 310) 00:11:25.693 10673.006 - 10735.421: 24.4577% ( 330) 00:11:25.693 10735.421 - 10797.836: 27.5000% ( 331) 00:11:25.693 10797.836 - 10860.251: 30.5607% ( 333) 00:11:25.693 10860.251 - 10922.667: 33.8511% ( 358) 00:11:25.693 10922.667 - 10985.082: 37.2518% ( 370) 00:11:25.693 10985.082 - 11047.497: 40.6801% ( 373) 00:11:25.693 11047.497 - 11109.912: 44.3107% ( 395) 00:11:25.693 11109.912 - 11172.328: 48.0699% ( 409) 00:11:25.693 11172.328 - 11234.743: 51.6636% ( 391) 00:11:25.693 11234.743 - 11297.158: 55.2206% ( 387) 00:11:25.693 11297.158 - 11359.573: 58.4375% ( 350) 00:11:25.693 11359.573 - 11421.989: 61.4062% ( 323) 00:11:25.693 11421.989 - 11484.404: 63.8235% ( 263) 00:11:25.693 11484.404 - 11546.819: 65.9007% ( 226) 00:11:25.693 11546.819 - 11609.234: 67.6654% ( 192) 00:11:25.693 11609.234 - 11671.650: 69.3107% ( 179) 00:11:25.693 11671.650 - 11734.065: 70.9651% ( 180) 00:11:25.693 11734.065 - 11796.480: 72.6471% ( 183) 00:11:25.693 11796.480 - 11858.895: 74.3566% ( 186) 00:11:25.693 11858.895 - 11921.310: 76.0202% ( 181) 00:11:25.693 11921.310 - 11983.726: 77.6011% ( 172) 00:11:25.693 11983.726 - 12046.141: 79.1268% ( 166) 00:11:25.693 12046.141 - 12108.556: 80.5055% ( 150) 00:11:25.693 12108.556 - 12170.971: 81.6820% ( 128) 00:11:25.693 12170.971 - 12233.387: 82.7665% ( 118) 00:11:25.693 12233.387 - 12295.802: 83.8051% ( 113) 00:11:25.693 12295.802 - 12358.217: 84.7059% ( 98) 00:11:25.693 12358.217 - 12420.632: 85.6526% ( 103) 00:11:25.693 12420.632 - 12483.048: 86.5901% ( 102) 00:11:25.693 12483.048 - 12545.463: 87.4908% ( 98) 00:11:25.693 12545.463 - 12607.878: 88.3180% ( 90) 00:11:25.693 12607.878 - 12670.293: 89.0625% ( 81) 00:11:25.693 12670.293 - 12732.709: 89.7702% ( 77) 00:11:25.693 12732.709 - 12795.124: 90.3493% ( 63) 00:11:25.693 12795.124 - 12857.539: 90.8824% ( 58) 00:11:25.693 12857.539 - 12919.954: 91.2960% ( 45) 00:11:25.693 12919.954 - 12982.370: 91.6636% ( 40) 00:11:25.693 12982.370 - 13044.785: 92.0129% ( 38) 00:11:25.693 13044.785 - 13107.200: 92.2335% ( 24) 00:11:25.693 13107.200 - 13169.615: 92.4816% ( 27) 00:11:25.693 13169.615 - 13232.030: 92.7022% ( 24) 
00:11:25.693 13232.030 - 13294.446: 92.8860% ( 20) 00:11:25.693 13294.446 - 13356.861: 93.0790% ( 21) 00:11:25.693 13356.861 - 13419.276: 93.2904% ( 23) 00:11:25.693 13419.276 - 13481.691: 93.4926% ( 22) 00:11:25.693 13481.691 - 13544.107: 93.6857% ( 21) 00:11:25.693 13544.107 - 13606.522: 93.8787% ( 21) 00:11:25.693 13606.522 - 13668.937: 94.0901% ( 23) 00:11:25.693 13668.937 - 13731.352: 94.3015% ( 23) 00:11:25.693 13731.352 - 13793.768: 94.4761% ( 19) 00:11:25.693 13793.768 - 13856.183: 94.6875% ( 23) 00:11:25.693 13856.183 - 13918.598: 94.8713% ( 20) 00:11:25.693 13918.598 - 13981.013: 95.0643% ( 21) 00:11:25.693 13981.013 - 14043.429: 95.1930% ( 14) 00:11:25.693 14043.429 - 14105.844: 95.3033% ( 12) 00:11:25.693 14105.844 - 14168.259: 95.3768% ( 8) 00:11:25.693 14168.259 - 14230.674: 95.4596% ( 9) 00:11:25.693 14230.674 - 14293.090: 95.5331% ( 8) 00:11:25.693 14293.090 - 14355.505: 95.5882% ( 6) 00:11:25.693 14355.505 - 14417.920: 95.6434% ( 6) 00:11:25.693 14417.920 - 14480.335: 95.7169% ( 8) 00:11:25.693 14480.335 - 14542.750: 95.7904% ( 8) 00:11:25.693 14542.750 - 14605.166: 95.8915% ( 11) 00:11:25.693 14605.166 - 14667.581: 95.9651% ( 8) 00:11:25.693 14667.581 - 14729.996: 96.0662% ( 11) 00:11:25.693 14729.996 - 14792.411: 96.1397% ( 8) 00:11:25.693 14792.411 - 14854.827: 96.2224% ( 9) 00:11:25.693 14854.827 - 14917.242: 96.3051% ( 9) 00:11:25.693 14917.242 - 14979.657: 96.4154% ( 12) 00:11:25.693 14979.657 - 15042.072: 96.5257% ( 12) 00:11:25.693 15042.072 - 15104.488: 96.6176% ( 10) 00:11:25.693 15104.488 - 15166.903: 96.7004% ( 9) 00:11:25.693 15166.903 - 15229.318: 96.8015% ( 11) 00:11:25.693 15229.318 - 15291.733: 96.8750% ( 8) 00:11:25.693 15291.733 - 15354.149: 96.9669% ( 10) 00:11:25.693 15354.149 - 15416.564: 97.0404% ( 8) 00:11:25.693 15416.564 - 15478.979: 97.1232% ( 9) 00:11:25.693 15478.979 - 15541.394: 97.2059% ( 9) 00:11:25.693 15541.394 - 15603.810: 97.2518% ( 5) 00:11:25.693 15603.810 - 15666.225: 97.2794% ( 3) 00:11:25.693 15666.225 - 15728.640: 97.3346% ( 6) 00:11:25.693 15728.640 - 15791.055: 97.3805% ( 5) 00:11:25.693 15791.055 - 15853.470: 97.4081% ( 3) 00:11:25.693 15853.470 - 15915.886: 97.4724% ( 7) 00:11:25.693 15915.886 - 15978.301: 97.5551% ( 9) 00:11:25.693 15978.301 - 16103.131: 97.6838% ( 14) 00:11:25.693 16103.131 - 16227.962: 97.8217% ( 15) 00:11:25.693 16227.962 - 16352.792: 97.9504% ( 14) 00:11:25.693 16352.792 - 16477.623: 98.0882% ( 15) 00:11:25.693 16477.623 - 16602.453: 98.2445% ( 17) 00:11:25.693 16602.453 - 16727.284: 98.3640% ( 13) 00:11:25.693 16727.284 - 16852.114: 98.4835% ( 13) 00:11:25.693 16852.114 - 16976.945: 98.5570% ( 8) 00:11:25.693 16976.945 - 17101.775: 98.5938% ( 4) 00:11:25.693 17101.775 - 17226.606: 98.6305% ( 4) 00:11:25.693 17226.606 - 17351.436: 98.6673% ( 4) 00:11:25.693 17351.436 - 17476.267: 98.6949% ( 3) 00:11:25.693 17476.267 - 17601.097: 98.7316% ( 4) 00:11:25.693 17601.097 - 17725.928: 98.7592% ( 3) 00:11:25.693 17725.928 - 17850.758: 98.7960% ( 4) 00:11:25.693 17850.758 - 17975.589: 98.8235% ( 3) 00:11:25.693 21346.011 - 21470.842: 98.8327% ( 1) 00:11:25.693 21470.842 - 21595.672: 98.8603% ( 3) 00:11:25.693 21595.672 - 21720.503: 98.8787% ( 2) 00:11:25.693 21720.503 - 21845.333: 98.9062% ( 3) 00:11:25.693 21845.333 - 21970.164: 98.9338% ( 3) 00:11:25.693 21970.164 - 22094.994: 98.9522% ( 2) 00:11:25.693 22094.994 - 22219.825: 98.9798% ( 3) 00:11:25.693 22219.825 - 22344.655: 98.9982% ( 2) 00:11:25.693 22344.655 - 22469.486: 99.0165% ( 2) 00:11:25.693 22469.486 - 22594.316: 99.0441% ( 3) 00:11:25.693 22594.316 - 
22719.147: 99.0717% ( 3) 00:11:25.693 22719.147 - 22843.977: 99.0901% ( 2) 00:11:25.693 22843.977 - 22968.808: 99.1176% ( 3) 00:11:25.693 22968.808 - 23093.638: 99.1452% ( 3) 00:11:25.693 23093.638 - 23218.469: 99.1728% ( 3) 00:11:25.693 23218.469 - 23343.299: 99.1912% ( 2) 00:11:25.693 23343.299 - 23468.130: 99.2188% ( 3) 00:11:25.693 23468.130 - 23592.960: 99.2371% ( 2) 00:11:25.693 23592.960 - 23717.790: 99.2647% ( 3) 00:11:25.693 23717.790 - 23842.621: 99.2831% ( 2) 00:11:25.693 23842.621 - 23967.451: 99.3107% ( 3) 00:11:25.693 23967.451 - 24092.282: 99.3290% ( 2) 00:11:25.693 24092.282 - 24217.112: 99.3566% ( 3) 00:11:25.693 24217.112 - 24341.943: 99.3842% ( 3) 00:11:25.693 24341.943 - 24466.773: 99.4118% ( 3) 00:11:25.694 29459.992 - 29584.823: 99.4301% ( 2) 00:11:25.694 29584.823 - 29709.653: 99.4485% ( 2) 00:11:25.694 29709.653 - 29834.484: 99.4761% ( 3) 00:11:25.694 29834.484 - 29959.314: 99.5037% ( 3) 00:11:25.694 29959.314 - 30084.145: 99.5312% ( 3) 00:11:25.694 30084.145 - 30208.975: 99.5588% ( 3) 00:11:25.694 30208.975 - 30333.806: 99.5772% ( 2) 00:11:25.694 30333.806 - 30458.636: 99.6048% ( 3) 00:11:25.694 30458.636 - 30583.467: 99.6232% ( 2) 00:11:25.694 30583.467 - 30708.297: 99.6507% ( 3) 00:11:25.694 30708.297 - 30833.128: 99.6783% ( 3) 00:11:25.694 30833.128 - 30957.958: 99.6967% ( 2) 00:11:25.694 30957.958 - 31082.789: 99.7243% ( 3) 00:11:25.694 31082.789 - 31207.619: 99.7518% ( 3) 00:11:25.694 31207.619 - 31332.450: 99.7702% ( 2) 00:11:25.694 31332.450 - 31457.280: 99.7978% ( 3) 00:11:25.694 31457.280 - 31582.110: 99.8254% ( 3) 00:11:25.694 31582.110 - 31706.941: 99.8529% ( 3) 00:11:25.694 31706.941 - 31831.771: 99.8805% ( 3) 00:11:25.694 31831.771 - 31956.602: 99.9081% ( 3) 00:11:25.694 31956.602 - 32206.263: 99.9540% ( 5) 00:11:25.694 32206.263 - 32455.924: 100.0000% ( 5) 00:11:25.694 00:11:25.694 19:33:16 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:11:27.101 Initializing NVMe Controllers 00:11:27.101 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:27.101 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:27.101 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:27.101 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:27.101 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:27.101 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:27.101 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:27.101 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:27.101 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:27.101 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:27.101 Initialization complete. Launching workers. 
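Before the write-phase numbers below, a quick consistency check on the read-phase summary above; an illustrative sketch with values copied from the 0000:00:10.0 row (the small gap in the latency estimate is expected, since Little's law ignores ramp-up and teardown):

# MiB/s column is IOPS x I/O size (12288 bytes here)
awk 'BEGIN { printf "%.2f MiB/s\n", 10800.47 * 12288 / 1048576 }'   # ~126.57, as reported
# average latency is roughly queue depth / IOPS (Little's law)
awk 'BEGIN { printf "%.0f us\n", 128 / 10800.47 * 1e6 }'            # ~11851 us vs. 11883.32 us reported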
00:11:27.101 ======================================================== 00:11:27.101 Latency(us) 00:11:27.101 Device Information : IOPS MiB/s Average min max 00:11:27.101 PCIE (0000:00:10.0) NSID 1 from core 0: 9732.50 114.05 13180.10 9670.37 50600.52 00:11:27.101 PCIE (0000:00:11.0) NSID 1 from core 0: 9732.50 114.05 13140.21 9907.08 46873.76 00:11:27.101 PCIE (0000:00:13.0) NSID 1 from core 0: 9732.50 114.05 13103.20 9824.18 44330.42 00:11:27.101 PCIE (0000:00:12.0) NSID 1 from core 0: 9732.50 114.05 13066.28 9824.14 41058.40 00:11:27.101 PCIE (0000:00:12.0) NSID 2 from core 0: 9732.50 114.05 13030.43 9743.18 38344.21 00:11:27.101 PCIE (0000:00:12.0) NSID 3 from core 0: 9796.11 114.80 12910.28 9905.76 28378.12 00:11:27.101 ======================================================== 00:11:27.101 Total : 58458.58 685.06 13071.57 9670.37 50600.52 00:11:27.101 00:11:27.101 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:27.101 ================================================================================= 00:11:27.101 1.00000% : 10423.345us 00:11:27.101 10.00000% : 11546.819us 00:11:27.101 25.00000% : 12108.556us 00:11:27.101 50.00000% : 12795.124us 00:11:27.101 75.00000% : 13419.276us 00:11:27.101 90.00000% : 14168.259us 00:11:27.101 95.00000% : 14917.242us 00:11:27.101 98.00000% : 15728.640us 00:11:27.101 99.00000% : 40445.074us 00:11:27.101 99.50000% : 48434.225us 00:11:27.101 99.90000% : 50181.851us 00:11:27.101 99.99000% : 50681.173us 00:11:27.101 99.99900% : 50681.173us 00:11:27.101 99.99990% : 50681.173us 00:11:27.101 99.99999% : 50681.173us 00:11:27.101 00:11:27.101 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:27.101 ================================================================================= 00:11:27.101 1.00000% : 10423.345us 00:11:27.101 10.00000% : 11609.234us 00:11:27.101 25.00000% : 12170.971us 00:11:27.101 50.00000% : 12795.124us 00:11:27.101 75.00000% : 13356.861us 00:11:27.101 90.00000% : 14105.844us 00:11:27.101 95.00000% : 15042.072us 00:11:27.101 98.00000% : 15666.225us 00:11:27.101 99.00000% : 37199.482us 00:11:27.101 99.50000% : 44938.971us 00:11:27.101 99.90000% : 46686.598us 00:11:27.101 99.99000% : 46936.259us 00:11:27.101 99.99900% : 46936.259us 00:11:27.101 99.99990% : 46936.259us 00:11:27.101 99.99999% : 46936.259us 00:11:27.101 00:11:27.101 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:27.101 ================================================================================= 00:11:27.101 1.00000% : 10360.930us 00:11:27.101 10.00000% : 11609.234us 00:11:27.102 25.00000% : 12170.971us 00:11:27.102 50.00000% : 12732.709us 00:11:27.102 75.00000% : 13356.861us 00:11:27.102 90.00000% : 14105.844us 00:11:27.102 95.00000% : 14979.657us 00:11:27.102 98.00000% : 15666.225us 00:11:27.102 99.00000% : 34702.872us 00:11:27.102 99.50000% : 42442.362us 00:11:27.102 99.90000% : 44189.989us 00:11:27.102 99.99000% : 44439.650us 00:11:27.102 99.99900% : 44439.650us 00:11:27.102 99.99990% : 44439.650us 00:11:27.102 99.99999% : 44439.650us 00:11:27.102 00:11:27.102 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:27.102 ================================================================================= 00:11:27.102 1.00000% : 10485.760us 00:11:27.102 10.00000% : 11609.234us 00:11:27.102 25.00000% : 12170.971us 00:11:27.102 50.00000% : 12795.124us 00:11:27.102 75.00000% : 13356.861us 00:11:27.102 90.00000% : 14105.844us 00:11:27.102 95.00000% : 14979.657us 00:11:27.102 98.00000% : 15666.225us 
00:11:27.102 99.00000% : 31207.619us 00:11:27.102 99.50000% : 39196.770us 00:11:27.102 99.90000% : 40694.735us 00:11:27.102 99.99000% : 41194.057us 00:11:27.102 99.99900% : 41194.057us 00:11:27.102 99.99990% : 41194.057us 00:11:27.102 99.99999% : 41194.057us 00:11:27.102 00:11:27.102 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:27.102 ================================================================================= 00:11:27.102 1.00000% : 10485.760us 00:11:27.102 10.00000% : 11609.234us 00:11:27.102 25.00000% : 12170.971us 00:11:27.102 50.00000% : 12795.124us 00:11:27.102 75.00000% : 13356.861us 00:11:27.102 90.00000% : 14168.259us 00:11:27.102 95.00000% : 14979.657us 00:11:27.102 98.00000% : 15728.640us 00:11:27.102 99.00000% : 28086.857us 00:11:27.102 99.50000% : 36200.838us 00:11:27.102 99.90000% : 37948.465us 00:11:27.102 99.99000% : 38447.787us 00:11:27.102 99.99900% : 38447.787us 00:11:27.102 99.99990% : 38447.787us 00:11:27.102 99.99999% : 38447.787us 00:11:27.102 00:11:27.102 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:27.102 ================================================================================= 00:11:27.102 1.00000% : 10485.760us 00:11:27.102 10.00000% : 11609.234us 00:11:27.102 25.00000% : 12170.971us 00:11:27.102 50.00000% : 12795.124us 00:11:27.102 75.00000% : 13356.861us 00:11:27.102 90.00000% : 14168.259us 00:11:27.102 95.00000% : 14979.657us 00:11:27.102 98.00000% : 15728.640us 00:11:27.102 99.00000% : 18974.232us 00:11:27.102 99.50000% : 26339.230us 00:11:27.102 99.90000% : 28086.857us 00:11:27.102 99.99000% : 28461.349us 00:11:27.102 99.99900% : 28461.349us 00:11:27.102 99.99990% : 28461.349us 00:11:27.102 99.99999% : 28461.349us 00:11:27.102 00:11:27.102 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:27.102 ============================================================================== 00:11:27.102 Range in us Cumulative IO count 00:11:27.102 9611.947 - 9674.362: 0.0102% ( 1) 00:11:27.102 9674.362 - 9736.777: 0.1021% ( 9) 00:11:27.102 9736.777 - 9799.192: 0.2247% ( 12) 00:11:27.102 9799.192 - 9861.608: 0.2757% ( 5) 00:11:27.102 9861.608 - 9924.023: 0.3676% ( 9) 00:11:27.102 9924.023 - 9986.438: 0.4391% ( 7) 00:11:27.102 9986.438 - 10048.853: 0.5310% ( 9) 00:11:27.102 10048.853 - 10111.269: 0.6025% ( 7) 00:11:27.102 10111.269 - 10173.684: 0.6842% ( 8) 00:11:27.102 10173.684 - 10236.099: 0.7659% ( 8) 00:11:27.102 10236.099 - 10298.514: 0.8272% ( 6) 00:11:27.102 10298.514 - 10360.930: 0.9191% ( 9) 00:11:27.102 10360.930 - 10423.345: 1.0723% ( 15) 00:11:27.102 10423.345 - 10485.760: 1.1744% ( 10) 00:11:27.102 10485.760 - 10548.175: 1.2663% ( 9) 00:11:27.102 10548.175 - 10610.590: 1.3685% ( 10) 00:11:27.102 10610.590 - 10673.006: 1.6340% ( 26) 00:11:27.102 10673.006 - 10735.421: 1.8382% ( 20) 00:11:27.102 10735.421 - 10797.836: 2.0629% ( 22) 00:11:27.102 10797.836 - 10860.251: 2.3284% ( 26) 00:11:27.102 10860.251 - 10922.667: 2.7165% ( 38) 00:11:27.102 10922.667 - 10985.082: 3.2373% ( 51) 00:11:27.102 10985.082 - 11047.497: 3.8092% ( 56) 00:11:27.102 11047.497 - 11109.912: 4.2994% ( 48) 00:11:27.102 11109.912 - 11172.328: 4.8611% ( 55) 00:11:27.102 11172.328 - 11234.743: 5.5862% ( 71) 00:11:27.102 11234.743 - 11297.158: 6.3419% ( 74) 00:11:27.102 11297.158 - 11359.573: 7.1385% ( 78) 00:11:27.102 11359.573 - 11421.989: 8.0372% ( 88) 00:11:27.102 11421.989 - 11484.404: 9.0891% ( 103) 00:11:27.102 11484.404 - 11546.819: 10.0592% ( 95) 00:11:27.102 11546.819 - 11609.234: 11.3154% ( 123) 
00:11:27.102 11609.234 - 11671.650: 12.7451% ( 140) 00:11:27.102 11671.650 - 11734.065: 14.1136% ( 134) 00:11:27.102 11734.065 - 11796.480: 15.7067% ( 156) 00:11:27.102 11796.480 - 11858.895: 17.3815% ( 164) 00:11:27.102 11858.895 - 11921.310: 19.3117% ( 189) 00:11:27.102 11921.310 - 11983.726: 21.2929% ( 194) 00:11:27.102 11983.726 - 12046.141: 23.2230% ( 189) 00:11:27.102 12046.141 - 12108.556: 25.0715% ( 181) 00:11:27.102 12108.556 - 12170.971: 27.3386% ( 222) 00:11:27.102 12170.971 - 12233.387: 29.6467% ( 226) 00:11:27.102 12233.387 - 12295.802: 32.0772% ( 238) 00:11:27.102 12295.802 - 12358.217: 34.5180% ( 239) 00:11:27.102 12358.217 - 12420.632: 37.0813% ( 251) 00:11:27.102 12420.632 - 12483.048: 39.5527% ( 242) 00:11:27.102 12483.048 - 12545.463: 42.0445% ( 244) 00:11:27.102 12545.463 - 12607.878: 44.5976% ( 250) 00:11:27.102 12607.878 - 12670.293: 47.0792% ( 243) 00:11:27.102 12670.293 - 12732.709: 49.8877% ( 275) 00:11:27.102 12732.709 - 12795.124: 52.6246% ( 268) 00:11:27.102 12795.124 - 12857.539: 55.3717% ( 269) 00:11:27.102 12857.539 - 12919.954: 57.9350% ( 251) 00:11:27.102 12919.954 - 12982.370: 60.4269% ( 244) 00:11:27.102 12982.370 - 13044.785: 62.8676% ( 239) 00:11:27.102 13044.785 - 13107.200: 65.4003% ( 248) 00:11:27.102 13107.200 - 13169.615: 67.7083% ( 226) 00:11:27.102 13169.615 - 13232.030: 69.9551% ( 220) 00:11:27.102 13232.030 - 13294.446: 72.0588% ( 206) 00:11:27.102 13294.446 - 13356.861: 74.1319% ( 203) 00:11:27.102 13356.861 - 13419.276: 76.0110% ( 184) 00:11:27.102 13419.276 - 13481.691: 77.7982% ( 175) 00:11:27.102 13481.691 - 13544.107: 79.4833% ( 165) 00:11:27.102 13544.107 - 13606.522: 80.9436% ( 143) 00:11:27.102 13606.522 - 13668.937: 82.4244% ( 145) 00:11:27.102 13668.937 - 13731.352: 83.8848% ( 143) 00:11:27.102 13731.352 - 13793.768: 85.2022% ( 129) 00:11:27.102 13793.768 - 13856.183: 86.4992% ( 127) 00:11:27.102 13856.183 - 13918.598: 87.4694% ( 95) 00:11:27.102 13918.598 - 13981.013: 88.3170% ( 83) 00:11:27.102 13981.013 - 14043.429: 89.1748% ( 84) 00:11:27.102 14043.429 - 14105.844: 89.8386% ( 65) 00:11:27.102 14105.844 - 14168.259: 90.4514% ( 60) 00:11:27.102 14168.259 - 14230.674: 91.0641% ( 60) 00:11:27.102 14230.674 - 14293.090: 91.4828% ( 41) 00:11:27.102 14293.090 - 14355.505: 91.9424% ( 45) 00:11:27.102 14355.505 - 14417.920: 92.2896% ( 34) 00:11:27.102 14417.920 - 14480.335: 92.7185% ( 42) 00:11:27.102 14480.335 - 14542.750: 93.1475% ( 42) 00:11:27.102 14542.750 - 14605.166: 93.5458% ( 39) 00:11:27.102 14605.166 - 14667.581: 93.9236% ( 37) 00:11:27.102 14667.581 - 14729.996: 94.2300% ( 30) 00:11:27.102 14729.996 - 14792.411: 94.5159% ( 28) 00:11:27.102 14792.411 - 14854.827: 94.7815% ( 26) 00:11:27.102 14854.827 - 14917.242: 95.0470% ( 26) 00:11:27.102 14917.242 - 14979.657: 95.3533% ( 30) 00:11:27.102 14979.657 - 15042.072: 95.6597% ( 30) 00:11:27.102 15042.072 - 15104.488: 95.9559% ( 29) 00:11:27.102 15104.488 - 15166.903: 96.2827% ( 32) 00:11:27.102 15166.903 - 15229.318: 96.5686% ( 28) 00:11:27.102 15229.318 - 15291.733: 96.8035% ( 23) 00:11:27.102 15291.733 - 15354.149: 97.0078% ( 20) 00:11:27.102 15354.149 - 15416.564: 97.2426% ( 23) 00:11:27.102 15416.564 - 15478.979: 97.4367% ( 19) 00:11:27.102 15478.979 - 15541.394: 97.6511% ( 21) 00:11:27.102 15541.394 - 15603.810: 97.7839% ( 13) 00:11:27.102 15603.810 - 15666.225: 97.9779% ( 19) 00:11:27.102 15666.225 - 15728.640: 98.1311% ( 15) 00:11:27.102 15728.640 - 15791.055: 98.2639% ( 13) 00:11:27.102 15791.055 - 15853.470: 98.3967% ( 13) 00:11:27.102 15853.470 - 15915.886: 98.4988% ( 10) 
00:11:27.102 15915.886 - 15978.301: 98.5498% ( 5) 00:11:27.102 15978.301 - 16103.131: 98.6417% ( 9) 00:11:27.102 16103.131 - 16227.962: 98.6928% ( 5) 00:11:27.102 38947.109 - 39196.770: 98.7541% ( 6) 00:11:27.102 39196.770 - 39446.430: 98.7949% ( 4) 00:11:27.102 39446.430 - 39696.091: 98.8562% ( 6) 00:11:27.102 39696.091 - 39945.752: 98.9073% ( 5) 00:11:27.103 39945.752 - 40195.413: 98.9583% ( 5) 00:11:27.103 40195.413 - 40445.074: 99.0094% ( 5) 00:11:27.103 40445.074 - 40694.735: 99.0707% ( 6) 00:11:27.103 40694.735 - 40944.396: 99.1217% ( 5) 00:11:27.103 40944.396 - 41194.057: 99.1728% ( 5) 00:11:27.103 41194.057 - 41443.718: 99.2341% ( 6) 00:11:27.103 41443.718 - 41693.379: 99.2851% ( 5) 00:11:27.103 41693.379 - 41943.040: 99.3362% ( 5) 00:11:27.103 41943.040 - 42192.701: 99.3464% ( 1) 00:11:27.103 47435.581 - 47685.242: 99.3770% ( 3) 00:11:27.103 47685.242 - 47934.903: 99.4281% ( 5) 00:11:27.103 47934.903 - 48184.564: 99.4792% ( 5) 00:11:27.103 48184.564 - 48434.225: 99.5404% ( 6) 00:11:27.103 48434.225 - 48683.886: 99.5915% ( 5) 00:11:27.103 48683.886 - 48933.547: 99.6426% ( 5) 00:11:27.103 48933.547 - 49183.208: 99.6936% ( 5) 00:11:27.103 49183.208 - 49432.869: 99.7447% ( 5) 00:11:27.103 49432.869 - 49682.530: 99.8060% ( 6) 00:11:27.103 49682.530 - 49932.190: 99.8570% ( 5) 00:11:27.103 49932.190 - 50181.851: 99.9081% ( 5) 00:11:27.103 50181.851 - 50431.512: 99.9694% ( 6) 00:11:27.103 50431.512 - 50681.173: 100.0000% ( 3) 00:11:27.103 00:11:27.103 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:27.103 ============================================================================== 00:11:27.103 Range in us Cumulative IO count 00:11:27.103 9861.608 - 9924.023: 0.0204% ( 2) 00:11:27.103 9924.023 - 9986.438: 0.1225% ( 10) 00:11:27.103 9986.438 - 10048.853: 0.2655% ( 14) 00:11:27.103 10048.853 - 10111.269: 0.3676% ( 10) 00:11:27.103 10111.269 - 10173.684: 0.5310% ( 16) 00:11:27.103 10173.684 - 10236.099: 0.6842% ( 15) 00:11:27.103 10236.099 - 10298.514: 0.8374% ( 15) 00:11:27.103 10298.514 - 10360.930: 0.9498% ( 11) 00:11:27.103 10360.930 - 10423.345: 1.0621% ( 11) 00:11:27.103 10423.345 - 10485.760: 1.2459% ( 18) 00:11:27.103 10485.760 - 10548.175: 1.3276% ( 8) 00:11:27.103 10548.175 - 10610.590: 1.4604% ( 13) 00:11:27.103 10610.590 - 10673.006: 1.5829% ( 12) 00:11:27.103 10673.006 - 10735.421: 1.7667% ( 18) 00:11:27.103 10735.421 - 10797.836: 2.0629% ( 29) 00:11:27.103 10797.836 - 10860.251: 2.3693% ( 30) 00:11:27.103 10860.251 - 10922.667: 2.5735% ( 20) 00:11:27.103 10922.667 - 10985.082: 2.7982% ( 22) 00:11:27.103 10985.082 - 11047.497: 3.0637% ( 26) 00:11:27.103 11047.497 - 11109.912: 3.4416% ( 37) 00:11:27.103 11109.912 - 11172.328: 3.7990% ( 35) 00:11:27.103 11172.328 - 11234.743: 4.4526% ( 64) 00:11:27.103 11234.743 - 11297.158: 5.3105% ( 84) 00:11:27.103 11297.158 - 11359.573: 6.1887% ( 86) 00:11:27.103 11359.573 - 11421.989: 7.1895% ( 98) 00:11:27.103 11421.989 - 11484.404: 8.2108% ( 100) 00:11:27.103 11484.404 - 11546.819: 9.3546% ( 112) 00:11:27.103 11546.819 - 11609.234: 10.6209% ( 124) 00:11:27.103 11609.234 - 11671.650: 12.1324% ( 148) 00:11:27.103 11671.650 - 11734.065: 13.5008% ( 134) 00:11:27.103 11734.065 - 11796.480: 14.7161% ( 119) 00:11:27.103 11796.480 - 11858.895: 16.1152% ( 137) 00:11:27.103 11858.895 - 11921.310: 17.6879% ( 154) 00:11:27.103 11921.310 - 11983.726: 19.5466% ( 182) 00:11:27.103 11983.726 - 12046.141: 21.5891% ( 200) 00:11:27.103 12046.141 - 12108.556: 23.8868% ( 225) 00:11:27.103 12108.556 - 12170.971: 26.5217% ( 258) 00:11:27.103 
12170.971 - 12233.387: 29.1054% ( 253) 00:11:27.103 12233.387 - 12295.802: 31.5257% ( 237) 00:11:27.103 12295.802 - 12358.217: 34.1503% ( 257) 00:11:27.103 12358.217 - 12420.632: 36.5502% ( 235) 00:11:27.103 12420.632 - 12483.048: 39.1544% ( 255) 00:11:27.103 12483.048 - 12545.463: 41.9833% ( 277) 00:11:27.103 12545.463 - 12607.878: 44.6691% ( 263) 00:11:27.103 12607.878 - 12670.293: 47.1916% ( 247) 00:11:27.103 12670.293 - 12732.709: 49.6732% ( 243) 00:11:27.103 12732.709 - 12795.124: 52.3284% ( 260) 00:11:27.103 12795.124 - 12857.539: 55.3513% ( 296) 00:11:27.103 12857.539 - 12919.954: 58.3435% ( 293) 00:11:27.103 12919.954 - 12982.370: 61.1724% ( 277) 00:11:27.103 12982.370 - 13044.785: 64.0421% ( 281) 00:11:27.103 13044.785 - 13107.200: 66.9016% ( 280) 00:11:27.103 13107.200 - 13169.615: 69.3525% ( 240) 00:11:27.103 13169.615 - 13232.030: 71.5584% ( 216) 00:11:27.103 13232.030 - 13294.446: 73.8971% ( 229) 00:11:27.103 13294.446 - 13356.861: 75.8374% ( 190) 00:11:27.103 13356.861 - 13419.276: 77.7982% ( 192) 00:11:27.103 13419.276 - 13481.691: 79.5445% ( 171) 00:11:27.103 13481.691 - 13544.107: 81.1377% ( 156) 00:11:27.103 13544.107 - 13606.522: 82.5776% ( 141) 00:11:27.103 13606.522 - 13668.937: 83.9767% ( 137) 00:11:27.103 13668.937 - 13731.352: 85.2635% ( 126) 00:11:27.103 13731.352 - 13793.768: 86.3256% ( 104) 00:11:27.103 13793.768 - 13856.183: 87.1834% ( 84) 00:11:27.103 13856.183 - 13918.598: 88.0106% ( 81) 00:11:27.103 13918.598 - 13981.013: 88.8685% ( 84) 00:11:27.103 13981.013 - 14043.429: 89.6038% ( 72) 00:11:27.103 14043.429 - 14105.844: 90.2063% ( 59) 00:11:27.103 14105.844 - 14168.259: 90.7680% ( 55) 00:11:27.103 14168.259 - 14230.674: 91.1560% ( 38) 00:11:27.103 14230.674 - 14293.090: 91.4420% ( 28) 00:11:27.103 14293.090 - 14355.505: 91.7279% ( 28) 00:11:27.103 14355.505 - 14417.920: 92.0139% ( 28) 00:11:27.103 14417.920 - 14480.335: 92.3100% ( 29) 00:11:27.103 14480.335 - 14542.750: 92.5756% ( 26) 00:11:27.103 14542.750 - 14605.166: 92.8411% ( 26) 00:11:27.103 14605.166 - 14667.581: 93.1475% ( 30) 00:11:27.103 14667.581 - 14729.996: 93.4743% ( 32) 00:11:27.103 14729.996 - 14792.411: 93.7602% ( 28) 00:11:27.103 14792.411 - 14854.827: 94.0564% ( 29) 00:11:27.103 14854.827 - 14917.242: 94.3627% ( 30) 00:11:27.103 14917.242 - 14979.657: 94.8019% ( 43) 00:11:27.103 14979.657 - 15042.072: 95.2410% ( 43) 00:11:27.103 15042.072 - 15104.488: 95.6087% ( 36) 00:11:27.103 15104.488 - 15166.903: 95.9457% ( 33) 00:11:27.103 15166.903 - 15229.318: 96.2520% ( 30) 00:11:27.103 15229.318 - 15291.733: 96.5788% ( 32) 00:11:27.103 15291.733 - 15354.149: 96.9363% ( 35) 00:11:27.103 15354.149 - 15416.564: 97.1507% ( 21) 00:11:27.103 15416.564 - 15478.979: 97.3958% ( 24) 00:11:27.103 15478.979 - 15541.394: 97.6001% ( 20) 00:11:27.103 15541.394 - 15603.810: 97.8350% ( 23) 00:11:27.103 15603.810 - 15666.225: 98.0494% ( 21) 00:11:27.103 15666.225 - 15728.640: 98.2128% ( 16) 00:11:27.103 15728.640 - 15791.055: 98.3456% ( 13) 00:11:27.103 15791.055 - 15853.470: 98.4477% ( 10) 00:11:27.103 15853.470 - 15915.886: 98.5498% ( 10) 00:11:27.103 15915.886 - 15978.301: 98.6009% ( 5) 00:11:27.103 15978.301 - 16103.131: 98.6724% ( 7) 00:11:27.103 16103.131 - 16227.962: 98.6928% ( 2) 00:11:27.103 35701.516 - 35951.177: 98.7234% ( 3) 00:11:27.103 35951.177 - 36200.838: 98.7847% ( 6) 00:11:27.103 36200.838 - 36450.499: 98.8460% ( 6) 00:11:27.103 36450.499 - 36700.160: 98.9073% ( 6) 00:11:27.103 36700.160 - 36949.821: 98.9788% ( 7) 00:11:27.103 36949.821 - 37199.482: 99.0298% ( 5) 00:11:27.103 37199.482 - 
37449.143: 99.0911% ( 6) 00:11:27.103 37449.143 - 37698.804: 99.1524% ( 6) 00:11:27.103 37698.804 - 37948.465: 99.2136% ( 6) 00:11:27.103 37948.465 - 38198.126: 99.2647% ( 5) 00:11:27.103 38198.126 - 38447.787: 99.3260% ( 6) 00:11:27.103 38447.787 - 38697.448: 99.3464% ( 2) 00:11:27.104 43940.328 - 44189.989: 99.3566% ( 1) 00:11:27.104 44189.989 - 44439.650: 99.4179% ( 6) 00:11:27.104 44439.650 - 44689.310: 99.4792% ( 6) 00:11:27.104 44689.310 - 44938.971: 99.5200% ( 4) 00:11:27.104 44938.971 - 45188.632: 99.5813% ( 6) 00:11:27.104 45188.632 - 45438.293: 99.6426% ( 6) 00:11:27.104 45438.293 - 45687.954: 99.7038% ( 6) 00:11:27.104 45687.954 - 45937.615: 99.7651% ( 6) 00:11:27.104 45937.615 - 46187.276: 99.8264% ( 6) 00:11:27.104 46187.276 - 46436.937: 99.8877% ( 6) 00:11:27.104 46436.937 - 46686.598: 99.9387% ( 5) 00:11:27.104 46686.598 - 46936.259: 100.0000% ( 6) 00:11:27.104 00:11:27.104 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:27.104 ============================================================================== 00:11:27.104 Range in us Cumulative IO count 00:11:27.104 9799.192 - 9861.608: 0.0408% ( 4) 00:11:27.104 9861.608 - 9924.023: 0.1021% ( 6) 00:11:27.104 9924.023 - 9986.438: 0.1634% ( 6) 00:11:27.104 9986.438 - 10048.853: 0.3166% ( 15) 00:11:27.104 10048.853 - 10111.269: 0.4493% ( 13) 00:11:27.104 10111.269 - 10173.684: 0.6536% ( 20) 00:11:27.104 10173.684 - 10236.099: 0.8374% ( 18) 00:11:27.104 10236.099 - 10298.514: 0.9906% ( 15) 00:11:27.104 10298.514 - 10360.930: 1.0927% ( 10) 00:11:27.104 10360.930 - 10423.345: 1.2051% ( 11) 00:11:27.104 10423.345 - 10485.760: 1.3174% ( 11) 00:11:27.104 10485.760 - 10548.175: 1.4502% ( 13) 00:11:27.104 10548.175 - 10610.590: 1.5727% ( 12) 00:11:27.104 10610.590 - 10673.006: 1.7157% ( 14) 00:11:27.104 10673.006 - 10735.421: 1.9506% ( 23) 00:11:27.104 10735.421 - 10797.836: 2.2365% ( 28) 00:11:27.104 10797.836 - 10860.251: 2.5633% ( 32) 00:11:27.104 10860.251 - 10922.667: 2.7471% ( 18) 00:11:27.104 10922.667 - 10985.082: 2.9208% ( 17) 00:11:27.104 10985.082 - 11047.497: 3.3292% ( 40) 00:11:27.104 11047.497 - 11109.912: 3.9726% ( 63) 00:11:27.104 11109.912 - 11172.328: 4.6569% ( 67) 00:11:27.104 11172.328 - 11234.743: 5.3922% ( 72) 00:11:27.104 11234.743 - 11297.158: 5.9436% ( 54) 00:11:27.104 11297.158 - 11359.573: 6.4747% ( 52) 00:11:27.104 11359.573 - 11421.989: 7.1998% ( 71) 00:11:27.104 11421.989 - 11484.404: 7.9350% ( 72) 00:11:27.104 11484.404 - 11546.819: 8.9869% ( 103) 00:11:27.104 11546.819 - 11609.234: 10.1511% ( 114) 00:11:27.104 11609.234 - 11671.650: 11.5298% ( 135) 00:11:27.104 11671.650 - 11734.065: 12.9902% ( 143) 00:11:27.104 11734.065 - 11796.480: 14.6344% ( 161) 00:11:27.104 11796.480 - 11858.895: 16.2173% ( 155) 00:11:27.104 11858.895 - 11921.310: 17.8717% ( 162) 00:11:27.104 11921.310 - 11983.726: 19.6589% ( 175) 00:11:27.104 11983.726 - 12046.141: 21.4257% ( 173) 00:11:27.104 12046.141 - 12108.556: 23.4273% ( 196) 00:11:27.104 12108.556 - 12170.971: 25.3472% ( 188) 00:11:27.104 12170.971 - 12233.387: 27.6961% ( 230) 00:11:27.104 12233.387 - 12295.802: 30.1675% ( 242) 00:11:27.104 12295.802 - 12358.217: 33.1189% ( 289) 00:11:27.104 12358.217 - 12420.632: 36.0805% ( 290) 00:11:27.104 12420.632 - 12483.048: 38.9195% ( 278) 00:11:27.104 12483.048 - 12545.463: 41.8913% ( 291) 00:11:27.104 12545.463 - 12607.878: 44.6691% ( 272) 00:11:27.104 12607.878 - 12670.293: 47.4060% ( 268) 00:11:27.104 12670.293 - 12732.709: 50.2247% ( 276) 00:11:27.104 12732.709 - 12795.124: 53.1863% ( 290) 00:11:27.104 12795.124 
- 12857.539: 56.0458% ( 280) 00:11:27.104 12857.539 - 12919.954: 58.6193% ( 252) 00:11:27.104 12919.954 - 12982.370: 61.1213% ( 245) 00:11:27.104 12982.370 - 13044.785: 63.5621% ( 239) 00:11:27.104 13044.785 - 13107.200: 66.0743% ( 246) 00:11:27.104 13107.200 - 13169.615: 68.5458% ( 242) 00:11:27.104 13169.615 - 13232.030: 71.0376% ( 244) 00:11:27.104 13232.030 - 13294.446: 73.5396% ( 245) 00:11:27.104 13294.446 - 13356.861: 75.7659% ( 218) 00:11:27.104 13356.861 - 13419.276: 77.8799% ( 207) 00:11:27.104 13419.276 - 13481.691: 79.6467% ( 173) 00:11:27.104 13481.691 - 13544.107: 81.1989% ( 152) 00:11:27.104 13544.107 - 13606.522: 82.6695% ( 144) 00:11:27.104 13606.522 - 13668.937: 84.0380% ( 134) 00:11:27.104 13668.937 - 13731.352: 85.3656% ( 130) 00:11:27.104 13731.352 - 13793.768: 86.4992% ( 111) 00:11:27.104 13793.768 - 13856.183: 87.5817% ( 106) 00:11:27.104 13856.183 - 13918.598: 88.5110% ( 91) 00:11:27.104 13918.598 - 13981.013: 89.2055% ( 68) 00:11:27.104 13981.013 - 14043.429: 89.8591% ( 64) 00:11:27.104 14043.429 - 14105.844: 90.3799% ( 51) 00:11:27.104 14105.844 - 14168.259: 90.8803% ( 49) 00:11:27.104 14168.259 - 14230.674: 91.2582% ( 37) 00:11:27.104 14230.674 - 14293.090: 91.6360% ( 37) 00:11:27.104 14293.090 - 14355.505: 91.9526% ( 31) 00:11:27.104 14355.505 - 14417.920: 92.2181% ( 26) 00:11:27.104 14417.920 - 14480.335: 92.5654% ( 34) 00:11:27.104 14480.335 - 14542.750: 92.9330% ( 36) 00:11:27.104 14542.750 - 14605.166: 93.1679% ( 23) 00:11:27.104 14605.166 - 14667.581: 93.3824% ( 21) 00:11:27.104 14667.581 - 14729.996: 93.6581% ( 27) 00:11:27.104 14729.996 - 14792.411: 94.0257% ( 36) 00:11:27.104 14792.411 - 14854.827: 94.4853% ( 45) 00:11:27.104 14854.827 - 14917.242: 94.8938% ( 40) 00:11:27.104 14917.242 - 14979.657: 95.2819% ( 38) 00:11:27.104 14979.657 - 15042.072: 95.6393% ( 35) 00:11:27.104 15042.072 - 15104.488: 95.9150% ( 27) 00:11:27.104 15104.488 - 15166.903: 96.1806% ( 26) 00:11:27.104 15166.903 - 15229.318: 96.3950% ( 21) 00:11:27.104 15229.318 - 15291.733: 96.6401% ( 24) 00:11:27.104 15291.733 - 15354.149: 96.8954% ( 25) 00:11:27.104 15354.149 - 15416.564: 97.1201% ( 22) 00:11:27.104 15416.564 - 15478.979: 97.3652% ( 24) 00:11:27.104 15478.979 - 15541.394: 97.5899% ( 22) 00:11:27.104 15541.394 - 15603.810: 97.8145% ( 22) 00:11:27.104 15603.810 - 15666.225: 98.0392% ( 22) 00:11:27.104 15666.225 - 15728.640: 98.2435% ( 20) 00:11:27.104 15728.640 - 15791.055: 98.4375% ( 19) 00:11:27.104 15791.055 - 15853.470: 98.5703% ( 13) 00:11:27.104 15853.470 - 15915.886: 98.6826% ( 11) 00:11:27.104 15915.886 - 15978.301: 98.6928% ( 1) 00:11:27.104 32955.246 - 33204.907: 98.7030% ( 1) 00:11:27.104 33204.907 - 33454.568: 98.7643% ( 6) 00:11:27.104 33454.568 - 33704.229: 98.8256% ( 6) 00:11:27.104 33704.229 - 33953.890: 98.8868% ( 6) 00:11:27.104 33953.890 - 34203.550: 98.9481% ( 6) 00:11:27.104 34203.550 - 34453.211: 98.9992% ( 5) 00:11:27.104 34453.211 - 34702.872: 99.0605% ( 6) 00:11:27.104 34702.872 - 34952.533: 99.1217% ( 6) 00:11:27.104 34952.533 - 35202.194: 99.1830% ( 6) 00:11:27.104 35202.194 - 35451.855: 99.2341% ( 5) 00:11:27.104 35451.855 - 35701.516: 99.2953% ( 6) 00:11:27.104 35701.516 - 35951.177: 99.3464% ( 5) 00:11:27.104 41443.718 - 41693.379: 99.3770% ( 3) 00:11:27.104 41693.379 - 41943.040: 99.4383% ( 6) 00:11:27.104 41943.040 - 42192.701: 99.4996% ( 6) 00:11:27.104 42192.701 - 42442.362: 99.5507% ( 5) 00:11:27.104 42442.362 - 42692.023: 99.6119% ( 6) 00:11:27.104 42692.023 - 42941.684: 99.6630% ( 5) 00:11:27.105 42941.684 - 43191.345: 99.7141% ( 5) 
00:11:27.105 43191.345 - 43441.006: 99.7753% ( 6) 00:11:27.105 43441.006 - 43690.667: 99.8366% ( 6) 00:11:27.105 43690.667 - 43940.328: 99.8979% ( 6) 00:11:27.105 43940.328 - 44189.989: 99.9592% ( 6) 00:11:27.105 44189.989 - 44439.650: 100.0000% ( 4) 00:11:27.105 00:11:27.105 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:27.105 ============================================================================== 00:11:27.105 Range in us Cumulative IO count 00:11:27.105 9799.192 - 9861.608: 0.0102% ( 1) 00:11:27.105 9861.608 - 9924.023: 0.0715% ( 6) 00:11:27.105 9924.023 - 9986.438: 0.1838% ( 11) 00:11:27.105 9986.438 - 10048.853: 0.3268% ( 14) 00:11:27.105 10048.853 - 10111.269: 0.4596% ( 13) 00:11:27.105 10111.269 - 10173.684: 0.5719% ( 11) 00:11:27.105 10173.684 - 10236.099: 0.6127% ( 4) 00:11:27.105 10236.099 - 10298.514: 0.7353% ( 12) 00:11:27.105 10298.514 - 10360.930: 0.8272% ( 9) 00:11:27.105 10360.930 - 10423.345: 0.9498% ( 12) 00:11:27.105 10423.345 - 10485.760: 1.1029% ( 15) 00:11:27.105 10485.760 - 10548.175: 1.2357% ( 13) 00:11:27.105 10548.175 - 10610.590: 1.3889% ( 15) 00:11:27.105 10610.590 - 10673.006: 1.6136% ( 22) 00:11:27.105 10673.006 - 10735.421: 1.8382% ( 22) 00:11:27.105 10735.421 - 10797.836: 2.2263% ( 38) 00:11:27.105 10797.836 - 10860.251: 2.4918% ( 26) 00:11:27.105 10860.251 - 10922.667: 2.7165% ( 22) 00:11:27.105 10922.667 - 10985.082: 2.9820% ( 26) 00:11:27.105 10985.082 - 11047.497: 3.3088% ( 32) 00:11:27.105 11047.497 - 11109.912: 3.7786% ( 46) 00:11:27.105 11109.912 - 11172.328: 4.4526% ( 66) 00:11:27.105 11172.328 - 11234.743: 5.0858% ( 62) 00:11:27.105 11234.743 - 11297.158: 5.7496% ( 65) 00:11:27.105 11297.158 - 11359.573: 6.4849% ( 72) 00:11:27.105 11359.573 - 11421.989: 7.1895% ( 69) 00:11:27.105 11421.989 - 11484.404: 7.9350% ( 73) 00:11:27.105 11484.404 - 11546.819: 9.0278% ( 107) 00:11:27.105 11546.819 - 11609.234: 10.1920% ( 114) 00:11:27.105 11609.234 - 11671.650: 11.4685% ( 125) 00:11:27.105 11671.650 - 11734.065: 13.0719% ( 157) 00:11:27.105 11734.065 - 11796.480: 14.6038% ( 150) 00:11:27.105 11796.480 - 11858.895: 16.3603% ( 172) 00:11:27.105 11858.895 - 11921.310: 18.0556% ( 166) 00:11:27.105 11921.310 - 11983.726: 19.8427% ( 175) 00:11:27.105 11983.726 - 12046.141: 21.7422% ( 186) 00:11:27.105 12046.141 - 12108.556: 23.5703% ( 179) 00:11:27.105 12108.556 - 12170.971: 25.6025% ( 199) 00:11:27.105 12170.971 - 12233.387: 27.9208% ( 227) 00:11:27.105 12233.387 - 12295.802: 30.6168% ( 264) 00:11:27.105 12295.802 - 12358.217: 33.6091% ( 293) 00:11:27.105 12358.217 - 12420.632: 36.5094% ( 284) 00:11:27.105 12420.632 - 12483.048: 39.3178% ( 275) 00:11:27.105 12483.048 - 12545.463: 41.9833% ( 261) 00:11:27.105 12545.463 - 12607.878: 44.4955% ( 246) 00:11:27.105 12607.878 - 12670.293: 47.0486% ( 250) 00:11:27.105 12670.293 - 12732.709: 49.5813% ( 248) 00:11:27.105 12732.709 - 12795.124: 52.3386% ( 270) 00:11:27.105 12795.124 - 12857.539: 55.1777% ( 278) 00:11:27.105 12857.539 - 12919.954: 58.2108% ( 297) 00:11:27.105 12919.954 - 12982.370: 60.9579% ( 269) 00:11:27.105 12982.370 - 13044.785: 63.5621% ( 255) 00:11:27.105 13044.785 - 13107.200: 65.9722% ( 236) 00:11:27.105 13107.200 - 13169.615: 68.3415% ( 232) 00:11:27.105 13169.615 - 13232.030: 70.7108% ( 232) 00:11:27.105 13232.030 - 13294.446: 73.1107% ( 235) 00:11:27.105 13294.446 - 13356.861: 75.3370% ( 218) 00:11:27.105 13356.861 - 13419.276: 77.3795% ( 200) 00:11:27.105 13419.276 - 13481.691: 79.2075% ( 179) 00:11:27.105 13481.691 - 13544.107: 80.8313% ( 159) 00:11:27.105 13544.107 - 
13606.522: 82.3121% ( 145) 00:11:27.105 13606.522 - 13668.937: 83.7010% ( 136) 00:11:27.105 13668.937 - 13731.352: 85.0388% ( 131) 00:11:27.105 13731.352 - 13793.768: 86.3460% ( 128) 00:11:27.105 13793.768 - 13856.183: 87.3775% ( 101) 00:11:27.105 13856.183 - 13918.598: 88.2149% ( 82) 00:11:27.105 13918.598 - 13981.013: 88.9808% ( 75) 00:11:27.105 13981.013 - 14043.429: 89.6344% ( 64) 00:11:27.105 14043.429 - 14105.844: 90.1757% ( 53) 00:11:27.105 14105.844 - 14168.259: 90.7578% ( 57) 00:11:27.105 14168.259 - 14230.674: 91.2684% ( 50) 00:11:27.105 14230.674 - 14293.090: 91.6565% ( 38) 00:11:27.105 14293.090 - 14355.505: 91.8811% ( 22) 00:11:27.105 14355.505 - 14417.920: 92.1364% ( 25) 00:11:27.105 14417.920 - 14480.335: 92.4122% ( 27) 00:11:27.105 14480.335 - 14542.750: 92.6777% ( 26) 00:11:27.105 14542.750 - 14605.166: 92.9534% ( 27) 00:11:27.105 14605.166 - 14667.581: 93.3007% ( 34) 00:11:27.105 14667.581 - 14729.996: 93.7194% ( 41) 00:11:27.105 14729.996 - 14792.411: 93.9849% ( 26) 00:11:27.105 14792.411 - 14854.827: 94.3219% ( 33) 00:11:27.105 14854.827 - 14917.242: 94.7712% ( 44) 00:11:27.105 14917.242 - 14979.657: 95.2104% ( 43) 00:11:27.105 14979.657 - 15042.072: 95.5882% ( 37) 00:11:27.105 15042.072 - 15104.488: 95.9355% ( 34) 00:11:27.105 15104.488 - 15166.903: 96.2418% ( 30) 00:11:27.105 15166.903 - 15229.318: 96.4665% ( 22) 00:11:27.105 15229.318 - 15291.733: 96.7218% ( 25) 00:11:27.105 15291.733 - 15354.149: 96.9567% ( 23) 00:11:27.105 15354.149 - 15416.564: 97.1712% ( 21) 00:11:27.105 15416.564 - 15478.979: 97.3856% ( 21) 00:11:27.105 15478.979 - 15541.394: 97.6205% ( 23) 00:11:27.105 15541.394 - 15603.810: 97.8248% ( 20) 00:11:27.105 15603.810 - 15666.225: 98.0392% ( 21) 00:11:27.105 15666.225 - 15728.640: 98.2537% ( 21) 00:11:27.105 15728.640 - 15791.055: 98.4273% ( 17) 00:11:27.105 15791.055 - 15853.470: 98.5600% ( 13) 00:11:27.105 15853.470 - 15915.886: 98.6213% ( 6) 00:11:27.105 15915.886 - 15978.301: 98.6520% ( 3) 00:11:27.105 15978.301 - 16103.131: 98.6928% ( 4) 00:11:27.105 29834.484 - 29959.314: 98.7030% ( 1) 00:11:27.105 29959.314 - 30084.145: 98.7337% ( 3) 00:11:27.105 30084.145 - 30208.975: 98.7643% ( 3) 00:11:27.105 30208.975 - 30333.806: 98.7949% ( 3) 00:11:27.105 30333.806 - 30458.636: 98.8256% ( 3) 00:11:27.105 30458.636 - 30583.467: 98.8562% ( 3) 00:11:27.105 30583.467 - 30708.297: 98.8971% ( 4) 00:11:27.105 30708.297 - 30833.128: 98.9277% ( 3) 00:11:27.105 30833.128 - 30957.958: 98.9583% ( 3) 00:11:27.105 30957.958 - 31082.789: 98.9890% ( 3) 00:11:27.105 31082.789 - 31207.619: 99.0196% ( 3) 00:11:27.105 31207.619 - 31332.450: 99.0502% ( 3) 00:11:27.105 31332.450 - 31457.280: 99.0809% ( 3) 00:11:27.105 31457.280 - 31582.110: 99.1115% ( 3) 00:11:27.105 31582.110 - 31706.941: 99.1422% ( 3) 00:11:27.105 31706.941 - 31831.771: 99.1830% ( 4) 00:11:27.105 31831.771 - 31956.602: 99.2136% ( 3) 00:11:27.105 31956.602 - 32206.263: 99.2647% ( 5) 00:11:27.105 32206.263 - 32455.924: 99.3260% ( 6) 00:11:27.105 32455.924 - 32705.585: 99.3464% ( 2) 00:11:27.105 38198.126 - 38447.787: 99.3566% ( 1) 00:11:27.105 38447.787 - 38697.448: 99.4179% ( 6) 00:11:27.105 38697.448 - 38947.109: 99.4792% ( 6) 00:11:27.105 38947.109 - 39196.770: 99.5404% ( 6) 00:11:27.105 39196.770 - 39446.430: 99.6017% ( 6) 00:11:27.105 39446.430 - 39696.091: 99.6630% ( 6) 00:11:27.105 39696.091 - 39945.752: 99.7141% ( 5) 00:11:27.105 39945.752 - 40195.413: 99.7855% ( 7) 00:11:27.106 40195.413 - 40445.074: 99.8468% ( 6) 00:11:27.106 40445.074 - 40694.735: 99.9081% ( 6) 00:11:27.106 40694.735 - 40944.396: 
99.9694% ( 6) 00:11:27.106 40944.396 - 41194.057: 100.0000% ( 3) 00:11:27.106 00:11:27.106 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:27.106 ============================================================================== 00:11:27.106 Range in us Cumulative IO count 00:11:27.106 9736.777 - 9799.192: 0.0102% ( 1) 00:11:27.106 9861.608 - 9924.023: 0.0204% ( 1) 00:11:27.106 9924.023 - 9986.438: 0.1328% ( 11) 00:11:27.106 9986.438 - 10048.853: 0.2553% ( 12) 00:11:27.106 10048.853 - 10111.269: 0.3779% ( 12) 00:11:27.106 10111.269 - 10173.684: 0.5208% ( 14) 00:11:27.106 10173.684 - 10236.099: 0.6230% ( 10) 00:11:27.106 10236.099 - 10298.514: 0.7659% ( 14) 00:11:27.106 10298.514 - 10360.930: 0.8885% ( 12) 00:11:27.106 10360.930 - 10423.345: 0.9906% ( 10) 00:11:27.106 10423.345 - 10485.760: 1.1029% ( 11) 00:11:27.106 10485.760 - 10548.175: 1.2766% ( 17) 00:11:27.106 10548.175 - 10610.590: 1.4400% ( 16) 00:11:27.106 10610.590 - 10673.006: 1.6033% ( 16) 00:11:27.106 10673.006 - 10735.421: 1.7463% ( 14) 00:11:27.106 10735.421 - 10797.836: 1.9608% ( 21) 00:11:27.106 10797.836 - 10860.251: 2.2467% ( 28) 00:11:27.106 10860.251 - 10922.667: 2.5531% ( 30) 00:11:27.106 10922.667 - 10985.082: 2.8595% ( 30) 00:11:27.106 10985.082 - 11047.497: 3.1556% ( 29) 00:11:27.106 11047.497 - 11109.912: 3.5846% ( 42) 00:11:27.106 11109.912 - 11172.328: 4.1871% ( 59) 00:11:27.106 11172.328 - 11234.743: 4.7181% ( 52) 00:11:27.106 11234.743 - 11297.158: 5.2492% ( 52) 00:11:27.106 11297.158 - 11359.573: 5.8109% ( 55) 00:11:27.106 11359.573 - 11421.989: 6.4747% ( 65) 00:11:27.106 11421.989 - 11484.404: 7.5266% ( 103) 00:11:27.106 11484.404 - 11546.819: 8.7520% ( 120) 00:11:27.106 11546.819 - 11609.234: 10.0797% ( 130) 00:11:27.106 11609.234 - 11671.650: 11.4685% ( 136) 00:11:27.106 11671.650 - 11734.065: 12.8472% ( 135) 00:11:27.106 11734.065 - 11796.480: 14.3587% ( 148) 00:11:27.106 11796.480 - 11858.895: 16.0743% ( 168) 00:11:27.106 11858.895 - 11921.310: 17.6675% ( 156) 00:11:27.106 11921.310 - 11983.726: 19.3117% ( 161) 00:11:27.106 11983.726 - 12046.141: 21.3440% ( 199) 00:11:27.106 12046.141 - 12108.556: 23.5907% ( 220) 00:11:27.106 12108.556 - 12170.971: 25.7761% ( 214) 00:11:27.106 12170.971 - 12233.387: 28.3292% ( 250) 00:11:27.106 12233.387 - 12295.802: 30.8517% ( 247) 00:11:27.106 12295.802 - 12358.217: 33.7214% ( 281) 00:11:27.106 12358.217 - 12420.632: 36.5911% ( 281) 00:11:27.106 12420.632 - 12483.048: 39.3995% ( 275) 00:11:27.106 12483.048 - 12545.463: 42.1773% ( 272) 00:11:27.106 12545.463 - 12607.878: 44.9653% ( 273) 00:11:27.106 12607.878 - 12670.293: 47.4877% ( 247) 00:11:27.106 12670.293 - 12732.709: 49.8264% ( 229) 00:11:27.106 12732.709 - 12795.124: 52.5633% ( 268) 00:11:27.106 12795.124 - 12857.539: 55.4024% ( 278) 00:11:27.106 12857.539 - 12919.954: 58.4048% ( 294) 00:11:27.106 12919.954 - 12982.370: 61.2541% ( 279) 00:11:27.106 12982.370 - 13044.785: 63.9604% ( 265) 00:11:27.106 13044.785 - 13107.200: 66.4828% ( 247) 00:11:27.106 13107.200 - 13169.615: 68.9134% ( 238) 00:11:27.106 13169.615 - 13232.030: 71.2214% ( 226) 00:11:27.106 13232.030 - 13294.446: 73.4273% ( 216) 00:11:27.106 13294.446 - 13356.861: 75.6127% ( 214) 00:11:27.106 13356.861 - 13419.276: 77.5633% ( 191) 00:11:27.106 13419.276 - 13481.691: 79.1871% ( 159) 00:11:27.106 13481.691 - 13544.107: 80.8415% ( 162) 00:11:27.106 13544.107 - 13606.522: 82.4346% ( 156) 00:11:27.106 13606.522 - 13668.937: 83.8848% ( 142) 00:11:27.106 13668.937 - 13731.352: 85.1511% ( 124) 00:11:27.106 13731.352 - 13793.768: 86.4175% ( 124) 
00:11:27.106 13793.768 - 13856.183: 87.4183% ( 98) 00:11:27.106 13856.183 - 13918.598: 88.2149% ( 78) 00:11:27.106 13918.598 - 13981.013: 88.7766% ( 55) 00:11:27.106 13981.013 - 14043.429: 89.3484% ( 56) 00:11:27.106 14043.429 - 14105.844: 89.9612% ( 60) 00:11:27.106 14105.844 - 14168.259: 90.5127% ( 54) 00:11:27.106 14168.259 - 14230.674: 90.9824% ( 46) 00:11:27.106 14230.674 - 14293.090: 91.4011% ( 41) 00:11:27.106 14293.090 - 14355.505: 91.6769% ( 27) 00:11:27.106 14355.505 - 14417.920: 91.9016% ( 22) 00:11:27.106 14417.920 - 14480.335: 92.1058% ( 20) 00:11:27.106 14480.335 - 14542.750: 92.4428% ( 33) 00:11:27.106 14542.750 - 14605.166: 92.7696% ( 32) 00:11:27.106 14605.166 - 14667.581: 93.0862% ( 31) 00:11:27.106 14667.581 - 14729.996: 93.3619% ( 27) 00:11:27.106 14729.996 - 14792.411: 93.6785% ( 31) 00:11:27.106 14792.411 - 14854.827: 94.1176% ( 43) 00:11:27.106 14854.827 - 14917.242: 94.6385% ( 51) 00:11:27.106 14917.242 - 14979.657: 95.0163% ( 37) 00:11:27.106 14979.657 - 15042.072: 95.4453% ( 42) 00:11:27.106 15042.072 - 15104.488: 95.8435% ( 39) 00:11:27.106 15104.488 - 15166.903: 96.1703% ( 32) 00:11:27.106 15166.903 - 15229.318: 96.5278% ( 35) 00:11:27.106 15229.318 - 15291.733: 96.7525% ( 22) 00:11:27.106 15291.733 - 15354.149: 96.9771% ( 22) 00:11:27.106 15354.149 - 15416.564: 97.2018% ( 22) 00:11:27.106 15416.564 - 15478.979: 97.4163% ( 21) 00:11:27.106 15478.979 - 15541.394: 97.6103% ( 19) 00:11:27.106 15541.394 - 15603.810: 97.7941% ( 18) 00:11:27.106 15603.810 - 15666.225: 97.9677% ( 17) 00:11:27.106 15666.225 - 15728.640: 98.1516% ( 18) 00:11:27.106 15728.640 - 15791.055: 98.3047% ( 15) 00:11:27.106 15791.055 - 15853.470: 98.4375% ( 13) 00:11:27.106 15853.470 - 15915.886: 98.5498% ( 11) 00:11:27.106 15915.886 - 15978.301: 98.5805% ( 3) 00:11:27.106 15978.301 - 16103.131: 98.6213% ( 4) 00:11:27.106 16103.131 - 16227.962: 98.6622% ( 4) 00:11:27.106 16227.962 - 16352.792: 98.6928% ( 3) 00:11:27.106 26713.722 - 26838.552: 98.7030% ( 1) 00:11:27.106 26838.552 - 26963.383: 98.7337% ( 3) 00:11:27.106 26963.383 - 27088.213: 98.7643% ( 3) 00:11:27.106 27088.213 - 27213.044: 98.7949% ( 3) 00:11:27.106 27213.044 - 27337.874: 98.8256% ( 3) 00:11:27.106 27337.874 - 27462.705: 98.8664% ( 4) 00:11:27.106 27462.705 - 27587.535: 98.8971% ( 3) 00:11:27.106 27587.535 - 27712.366: 98.9277% ( 3) 00:11:27.106 27712.366 - 27837.196: 98.9583% ( 3) 00:11:27.106 27837.196 - 27962.027: 98.9890% ( 3) 00:11:27.106 27962.027 - 28086.857: 99.0196% ( 3) 00:11:27.106 28086.857 - 28211.688: 99.0502% ( 3) 00:11:27.106 28211.688 - 28336.518: 99.0911% ( 4) 00:11:27.106 28336.518 - 28461.349: 99.1217% ( 3) 00:11:27.106 28461.349 - 28586.179: 99.1524% ( 3) 00:11:27.106 28586.179 - 28711.010: 99.1932% ( 4) 00:11:27.106 28711.010 - 28835.840: 99.2136% ( 2) 00:11:27.106 28835.840 - 28960.670: 99.2443% ( 3) 00:11:27.106 28960.670 - 29085.501: 99.2749% ( 3) 00:11:27.106 29085.501 - 29210.331: 99.3056% ( 3) 00:11:27.106 29210.331 - 29335.162: 99.3362% ( 3) 00:11:27.106 29335.162 - 29459.992: 99.3464% ( 1) 00:11:27.106 35451.855 - 35701.516: 99.3873% ( 4) 00:11:27.106 35701.516 - 35951.177: 99.4587% ( 7) 00:11:27.106 35951.177 - 36200.838: 99.5098% ( 5) 00:11:27.106 36200.838 - 36450.499: 99.5711% ( 6) 00:11:27.106 36450.499 - 36700.160: 99.6221% ( 5) 00:11:27.106 36700.160 - 36949.821: 99.6834% ( 6) 00:11:27.106 36949.821 - 37199.482: 99.7345% ( 5) 00:11:27.106 37199.482 - 37449.143: 99.7855% ( 5) 00:11:27.106 37449.143 - 37698.804: 99.8468% ( 6) 00:11:27.106 37698.804 - 37948.465: 99.9081% ( 6) 00:11:27.106 
37948.465 - 38198.126: 99.9592% ( 5) 00:11:27.106 38198.126 - 38447.787: 100.0000% ( 4) 00:11:27.106 00:11:27.106 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:27.106 ============================================================================== 00:11:27.107 Range in us Cumulative IO count 00:11:27.107 9861.608 - 9924.023: 0.0203% ( 2) 00:11:27.107 9924.023 - 9986.438: 0.1623% ( 14) 00:11:27.107 9986.438 - 10048.853: 0.2841% ( 12) 00:11:27.107 10048.853 - 10111.269: 0.3957% ( 11) 00:11:27.107 10111.269 - 10173.684: 0.5073% ( 11) 00:11:27.107 10173.684 - 10236.099: 0.6088% ( 10) 00:11:27.107 10236.099 - 10298.514: 0.7407% ( 13) 00:11:27.107 10298.514 - 10360.930: 0.8624% ( 12) 00:11:27.107 10360.930 - 10423.345: 0.9740% ( 11) 00:11:27.107 10423.345 - 10485.760: 1.0856% ( 11) 00:11:27.107 10485.760 - 10548.175: 1.2480% ( 16) 00:11:27.107 10548.175 - 10610.590: 1.4205% ( 17) 00:11:27.107 10610.590 - 10673.006: 1.6640% ( 24) 00:11:27.107 10673.006 - 10735.421: 1.7959% ( 13) 00:11:27.107 10735.421 - 10797.836: 1.9582% ( 16) 00:11:27.107 10797.836 - 10860.251: 2.1408% ( 18) 00:11:27.107 10860.251 - 10922.667: 2.3945% ( 25) 00:11:27.107 10922.667 - 10985.082: 2.6684% ( 27) 00:11:27.107 10985.082 - 11047.497: 2.9728% ( 30) 00:11:27.107 11047.497 - 11109.912: 3.3584% ( 38) 00:11:27.107 11109.912 - 11172.328: 3.8454% ( 48) 00:11:27.107 11172.328 - 11234.743: 4.2918% ( 44) 00:11:27.107 11234.743 - 11297.158: 4.9006% ( 60) 00:11:27.107 11297.158 - 11359.573: 5.6717% ( 76) 00:11:27.107 11359.573 - 11421.989: 6.7269% ( 104) 00:11:27.107 11421.989 - 11484.404: 7.7313% ( 99) 00:11:27.107 11484.404 - 11546.819: 8.9692% ( 122) 00:11:27.107 11546.819 - 11609.234: 10.2171% ( 123) 00:11:27.107 11609.234 - 11671.650: 11.6680% ( 143) 00:11:27.107 11671.650 - 11734.065: 13.0377% ( 135) 00:11:27.107 11734.065 - 11796.480: 14.3364% ( 128) 00:11:27.107 11796.480 - 11858.895: 15.9294% ( 157) 00:11:27.107 11858.895 - 11921.310: 17.5832% ( 163) 00:11:27.107 11921.310 - 11983.726: 19.3689% ( 176) 00:11:27.107 11983.726 - 12046.141: 21.4083% ( 201) 00:11:27.107 12046.141 - 12108.556: 23.4578% ( 202) 00:11:27.107 12108.556 - 12170.971: 25.9233% ( 243) 00:11:27.107 12170.971 - 12233.387: 28.2468% ( 229) 00:11:27.107 12233.387 - 12295.802: 31.2094% ( 292) 00:11:27.107 12295.802 - 12358.217: 33.9590% ( 271) 00:11:27.107 12358.217 - 12420.632: 36.6274% ( 263) 00:11:27.107 12420.632 - 12483.048: 39.2045% ( 254) 00:11:27.107 12483.048 - 12545.463: 41.6396% ( 240) 00:11:27.107 12545.463 - 12607.878: 44.3791% ( 270) 00:11:27.107 12607.878 - 12670.293: 46.8851% ( 247) 00:11:27.107 12670.293 - 12732.709: 49.5637% ( 264) 00:11:27.107 12732.709 - 12795.124: 52.4858% ( 288) 00:11:27.107 12795.124 - 12857.539: 55.3470% ( 282) 00:11:27.107 12857.539 - 12919.954: 58.4111% ( 302) 00:11:27.107 12919.954 - 12982.370: 61.1201% ( 267) 00:11:27.107 12982.370 - 13044.785: 63.8596% ( 270) 00:11:27.107 13044.785 - 13107.200: 66.2642% ( 237) 00:11:27.107 13107.200 - 13169.615: 68.5369% ( 224) 00:11:27.107 13169.615 - 13232.030: 70.8604% ( 229) 00:11:27.107 13232.030 - 13294.446: 73.1433% ( 225) 00:11:27.107 13294.446 - 13356.861: 75.3247% ( 215) 00:11:27.107 13356.861 - 13419.276: 77.2727% ( 192) 00:11:27.107 13419.276 - 13481.691: 78.9976% ( 170) 00:11:27.107 13481.691 - 13544.107: 80.6006% ( 158) 00:11:27.107 13544.107 - 13606.522: 82.1936% ( 157) 00:11:27.107 13606.522 - 13668.937: 83.7561% ( 154) 00:11:27.107 13668.937 - 13731.352: 85.1157% ( 134) 00:11:27.107 13731.352 - 13793.768: 86.2926% ( 116) 00:11:27.107 13793.768 
- 13856.183: 87.3377% ( 103) 00:11:27.107 13856.183 - 13918.598: 88.1595% ( 81) 00:11:27.107 13918.598 - 13981.013: 88.8799% ( 71) 00:11:27.107 13981.013 - 14043.429: 89.4075% ( 52) 00:11:27.107 14043.429 - 14105.844: 89.8945% ( 48) 00:11:27.107 14105.844 - 14168.259: 90.3713% ( 47) 00:11:27.107 14168.259 - 14230.674: 90.8482% ( 47) 00:11:27.107 14230.674 - 14293.090: 91.2338% ( 38) 00:11:27.107 14293.090 - 14355.505: 91.5584% ( 32) 00:11:27.107 14355.505 - 14417.920: 91.8628% ( 30) 00:11:27.107 14417.920 - 14480.335: 92.2078% ( 34) 00:11:27.107 14480.335 - 14542.750: 92.5223% ( 31) 00:11:27.107 14542.750 - 14605.166: 92.8267% ( 30) 00:11:27.107 14605.166 - 14667.581: 93.2021% ( 37) 00:11:27.107 14667.581 - 14729.996: 93.5369% ( 33) 00:11:27.107 14729.996 - 14792.411: 93.8515% ( 31) 00:11:27.107 14792.411 - 14854.827: 94.2167% ( 36) 00:11:27.107 14854.827 - 14917.242: 94.5820% ( 36) 00:11:27.107 14917.242 - 14979.657: 95.0588% ( 47) 00:11:27.107 14979.657 - 15042.072: 95.5053% ( 44) 00:11:27.107 15042.072 - 15104.488: 95.8807% ( 37) 00:11:27.107 15104.488 - 15166.903: 96.1851% ( 30) 00:11:27.107 15166.903 - 15229.318: 96.4996% ( 31) 00:11:27.107 15229.318 - 15291.733: 96.8243% ( 32) 00:11:27.107 15291.733 - 15354.149: 97.0475% ( 22) 00:11:27.107 15354.149 - 15416.564: 97.2606% ( 21) 00:11:27.107 15416.564 - 15478.979: 97.4939% ( 23) 00:11:27.107 15478.979 - 15541.394: 97.6562% ( 16) 00:11:27.107 15541.394 - 15603.810: 97.8287% ( 17) 00:11:27.107 15603.810 - 15666.225: 97.9403% ( 11) 00:11:27.107 15666.225 - 15728.640: 98.0519% ( 11) 00:11:27.107 15728.640 - 15791.055: 98.1737% ( 12) 00:11:27.107 15791.055 - 15853.470: 98.2752% ( 10) 00:11:27.107 15853.470 - 15915.886: 98.3462% ( 7) 00:11:27.107 15915.886 - 15978.301: 98.3969% ( 5) 00:11:27.107 15978.301 - 16103.131: 98.4984% ( 10) 00:11:27.107 16103.131 - 16227.962: 98.5390% ( 4) 00:11:27.107 16227.962 - 16352.792: 98.5897% ( 5) 00:11:27.107 16352.792 - 16477.623: 98.6303% ( 4) 00:11:27.107 16477.623 - 16602.453: 98.6810% ( 5) 00:11:27.108 16602.453 - 16727.284: 98.7013% ( 2) 00:11:27.108 17601.097 - 17725.928: 98.7216% ( 2) 00:11:27.108 17725.928 - 17850.758: 98.7520% ( 3) 00:11:27.108 17850.758 - 17975.589: 98.7825% ( 3) 00:11:27.108 17975.589 - 18100.419: 98.8129% ( 3) 00:11:27.108 18100.419 - 18225.250: 98.8433% ( 3) 00:11:27.108 18225.250 - 18350.080: 98.8738% ( 3) 00:11:27.108 18350.080 - 18474.910: 98.9042% ( 3) 00:11:27.108 18474.910 - 18599.741: 98.9347% ( 3) 00:11:27.108 18599.741 - 18724.571: 98.9651% ( 3) 00:11:27.108 18724.571 - 18849.402: 98.9955% ( 3) 00:11:27.108 18849.402 - 18974.232: 99.0260% ( 3) 00:11:27.108 18974.232 - 19099.063: 99.0463% ( 2) 00:11:27.108 19099.063 - 19223.893: 99.0869% ( 4) 00:11:27.108 19223.893 - 19348.724: 99.1173% ( 3) 00:11:27.108 19348.724 - 19473.554: 99.1477% ( 3) 00:11:27.108 19473.554 - 19598.385: 99.1782% ( 3) 00:11:27.108 19598.385 - 19723.215: 99.2188% ( 4) 00:11:27.108 19723.215 - 19848.046: 99.2390% ( 2) 00:11:27.108 19848.046 - 19972.876: 99.2695% ( 3) 00:11:27.108 19972.876 - 20097.707: 99.2999% ( 3) 00:11:27.108 20097.707 - 20222.537: 99.3304% ( 3) 00:11:27.108 20222.537 - 20347.368: 99.3506% ( 2) 00:11:27.108 25715.078 - 25839.909: 99.3811% ( 3) 00:11:27.108 25839.909 - 25964.739: 99.4115% ( 3) 00:11:27.108 25964.739 - 26089.570: 99.4420% ( 3) 00:11:27.108 26089.570 - 26214.400: 99.4825% ( 4) 00:11:27.108 26214.400 - 26339.230: 99.5028% ( 2) 00:11:27.108 26339.230 - 26464.061: 99.5333% ( 3) 00:11:27.108 26464.061 - 26588.891: 99.5637% ( 3) 00:11:27.108 26588.891 - 26713.722: 
99.5942% ( 3) 00:11:27.108 26713.722 - 26838.552: 99.6246% ( 3) 00:11:27.108 26838.552 - 26963.383: 99.6550% ( 3) 00:11:27.108 26963.383 - 27088.213: 99.6753% ( 2) 00:11:27.108 27088.213 - 27213.044: 99.7058% ( 3) 00:11:27.108 27213.044 - 27337.874: 99.7362% ( 3) 00:11:27.108 27337.874 - 27462.705: 99.7666% ( 3) 00:11:27.108 27462.705 - 27587.535: 99.7971% ( 3) 00:11:27.108 27587.535 - 27712.366: 99.8275% ( 3) 00:11:27.108 27712.366 - 27837.196: 99.8681% ( 4) 00:11:27.108 27837.196 - 27962.027: 99.8884% ( 2) 00:11:27.108 27962.027 - 28086.857: 99.9188% ( 3) 00:11:27.108 28086.857 - 28211.688: 99.9493% ( 3) 00:11:27.108 28211.688 - 28336.518: 99.9797% ( 3) 00:11:27.108 28336.518 - 28461.349: 100.0000% ( 2) 00:11:27.108 00:11:27.108 19:33:17 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:11:27.108 00:11:27.108 real 0m2.850s 00:11:27.108 user 0m2.345s 00:11:27.108 sys 0m0.385s 00:11:27.108 19:33:17 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:27.108 19:33:17 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:11:27.108 ************************************ 00:11:27.108 END TEST nvme_perf 00:11:27.108 ************************************ 00:11:27.108 19:33:17 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:27.108 19:33:17 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:27.108 19:33:17 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:27.108 19:33:17 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:27.108 19:33:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:27.108 ************************************ 00:11:27.108 START TEST nvme_hello_world 00:11:27.108 ************************************ 00:11:27.108 19:33:17 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:27.367 Initializing NVMe Controllers 00:11:27.367 Attached to 0000:00:10.0 00:11:27.367 Namespace ID: 1 size: 6GB 00:11:27.367 Attached to 0000:00:11.0 00:11:27.367 Namespace ID: 1 size: 5GB 00:11:27.367 Attached to 0000:00:13.0 00:11:27.367 Namespace ID: 1 size: 1GB 00:11:27.367 Attached to 0000:00:12.0 00:11:27.367 Namespace ID: 1 size: 4GB 00:11:27.367 Namespace ID: 2 size: 4GB 00:11:27.367 Namespace ID: 3 size: 4GB 00:11:27.367 Initialization complete. 00:11:27.367 INFO: using host memory buffer for IO 00:11:27.367 Hello world! 00:11:27.367 INFO: using host memory buffer for IO 00:11:27.367 Hello world! 00:11:27.367 INFO: using host memory buffer for IO 00:11:27.367 Hello world! 00:11:27.367 INFO: using host memory buffer for IO 00:11:27.367 Hello world! 00:11:27.367 INFO: using host memory buffer for IO 00:11:27.367 Hello world! 00:11:27.367 INFO: using host memory buffer for IO 00:11:27.367 Hello world! 
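Stepping back to the nvme_perf latency data above: each device's summary percentiles (1.00000% through 99.99999%) are read off the cumulative histogram printed after it, where every row gives a bucket's upper bound in microseconds, the cumulative share of I/Os at or below it, and the per-bucket count in parentheses. The Python sketch below shows one way to recover a percentile from such a table and to cross-check the IOPS and MiB/s columns of the summary table; it is reconstructed purely from the numbers printed above, it is not SPDK's own code, and the 12 KiB transfer size is an inference from those numbers rather than something the log states.

# Illustrative only: read a percentile off a cumulative latency histogram of the
# "Range in us ... Cumulative IO count" form above, then sanity-check IOPS vs MiB/s.

def percentile_from_cumulative(buckets, target_pct):
    """buckets: ascending list of (upper_bound_us, cumulative_pct)."""
    for upper_us, cum_pct in buckets:
        if cum_pct >= target_pct:
            return upper_us            # first bucket whose cumulative share reaches the target
    return buckets[-1][0]

# Rows transcribed from the PCIE (0000:00:10.0) NSID 1 histogram above.
pcie_10_buckets = [
    (12108.556, 25.0715),
    (12732.709, 49.8877),
    (12795.124, 52.6246),
    (13419.276, 76.0110),
    (14168.259, 90.4514),
]

print(percentile_from_cumulative(pcie_10_buckets, 50.0))   # 12795.124, matching the 50.00000% row

# Bandwidth cross-check for the same device: IOPS times the per-I/O size should give MiB/s.
iops, mib_s = 9732.50, 114.05
io_size_bytes = mib_s * 1024 * 1024 / iops
print(round(io_size_bytes))                                 # ~12288 bytes, i.e. 12 KiB per I/O

The same reading applies to every device in the table, and the total row (58458.58 IOPS, 685.06 MiB/s) is simply the per-namespace rows summed.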
00:11:27.625 00:11:27.625 real 0m0.376s 00:11:27.625 user 0m0.155s 00:11:27.625 sys 0m0.166s 00:11:27.625 19:33:18 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:27.625 19:33:18 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:27.625 ************************************ 00:11:27.625 END TEST nvme_hello_world 00:11:27.625 ************************************ 00:11:27.625 19:33:18 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:27.625 19:33:18 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:27.625 19:33:18 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:27.625 19:33:18 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:27.625 19:33:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:27.625 ************************************ 00:11:27.625 START TEST nvme_sgl 00:11:27.625 ************************************ 00:11:27.625 19:33:18 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:27.884 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:11:27.884 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:11:27.884 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:11:27.884 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:11:27.884 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:11:27.884 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:11:27.884 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:11:27.884 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:11:27.884 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:11:27.884 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:11:27.884 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:11:27.884 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:11:27.884 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:11:27.884 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:11:27.884 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:11:27.884 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:11:27.884 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:11:27.884 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:11:27.884 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:11:27.884 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:11:27.884 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:11:27.884 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:11:27.884 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:11:27.884 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:11:27.884 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:11:27.884 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:11:27.884 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:11:27.884 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:11:27.884 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:11:27.884 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:11:27.884 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:11:27.884 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:11:27.884 0000:00:12.0: build_io_request_8 Invalid IO length parameter 
00:11:27.884 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:11:27.884 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:11:27.884 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:11:28.143 NVMe Readv/Writev Request test 00:11:28.143 Attached to 0000:00:10.0 00:11:28.143 Attached to 0000:00:11.0 00:11:28.143 Attached to 0000:00:13.0 00:11:28.143 Attached to 0000:00:12.0 00:11:28.143 0000:00:10.0: build_io_request_2 test passed 00:11:28.143 0000:00:10.0: build_io_request_4 test passed 00:11:28.143 0000:00:10.0: build_io_request_5 test passed 00:11:28.143 0000:00:10.0: build_io_request_6 test passed 00:11:28.143 0000:00:10.0: build_io_request_7 test passed 00:11:28.143 0000:00:10.0: build_io_request_10 test passed 00:11:28.143 0000:00:11.0: build_io_request_2 test passed 00:11:28.143 0000:00:11.0: build_io_request_4 test passed 00:11:28.143 0000:00:11.0: build_io_request_5 test passed 00:11:28.143 0000:00:11.0: build_io_request_6 test passed 00:11:28.143 0000:00:11.0: build_io_request_7 test passed 00:11:28.143 0000:00:11.0: build_io_request_10 test passed 00:11:28.143 Cleaning up... 00:11:28.143 00:11:28.143 real 0m0.474s 00:11:28.143 user 0m0.239s 00:11:28.143 sys 0m0.186s 00:11:28.143 19:33:18 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:28.143 19:33:18 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:11:28.143 ************************************ 00:11:28.143 END TEST nvme_sgl 00:11:28.143 ************************************ 00:11:28.143 19:33:18 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:28.143 19:33:18 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:28.143 19:33:18 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:28.143 19:33:18 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.143 19:33:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:28.143 ************************************ 00:11:28.143 START TEST nvme_e2edp 00:11:28.143 ************************************ 00:11:28.143 19:33:18 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:28.402 NVMe Write/Read with End-to-End data protection test 00:11:28.402 Attached to 0000:00:10.0 00:11:28.402 Attached to 0000:00:11.0 00:11:28.402 Attached to 0000:00:13.0 00:11:28.402 Attached to 0000:00:12.0 00:11:28.402 Cleaning up... 
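The nvme_sgl pass above ("NVMe Readv/Writev Request test") exercises scatter-gather requests: one logical transfer described by several discontiguous buffers, including length combinations the log reports as "Invalid IO length parameter" even though the test as a whole passes, presumably deliberate negative cases. As a POSIX-level analogy only — this says nothing about SPDK's own SGL handling or about which lengths it treats as invalid — vectored I/O looks like the following sketch (Linux/Unix, using os.writev and os.readv):

import os, tempfile

fd, path = tempfile.mkstemp()
try:
    # One write call, three scattered source buffers (6 + 5 + 6 = 17 bytes).
    written = os.writev(fd, [b"Hello ", b"NVMe ", b"world\n"])
    assert written == 17

    # One read call, gathered into two destination buffers.
    os.lseek(fd, 0, os.SEEK_SET)
    first, second = bytearray(6), bytearray(11)
    got = os.readv(fd, [first, second])
    assert got == 17 and bytes(first) == b"Hello " and bytes(second) == b"NVMe world\n"
finally:
    os.close(fd)
    os.unlink(path)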
00:11:28.402 00:11:28.402 real 0m0.325s 00:11:28.402 user 0m0.123s 00:11:28.402 sys 0m0.155s 00:11:28.402 19:33:19 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:28.402 ************************************ 00:11:28.402 END TEST nvme_e2edp 00:11:28.402 ************************************ 00:11:28.402 19:33:19 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:11:28.402 19:33:19 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:28.402 19:33:19 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:28.402 19:33:19 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:28.402 19:33:19 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.402 19:33:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:28.402 ************************************ 00:11:28.402 START TEST nvme_reserve 00:11:28.402 ************************************ 00:11:28.402 19:33:19 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:28.967 ===================================================== 00:11:28.968 NVMe Controller at PCI bus 0, device 16, function 0 00:11:28.968 ===================================================== 00:11:28.968 Reservations: Not Supported 00:11:28.968 ===================================================== 00:11:28.968 NVMe Controller at PCI bus 0, device 17, function 0 00:11:28.968 ===================================================== 00:11:28.968 Reservations: Not Supported 00:11:28.968 ===================================================== 00:11:28.968 NVMe Controller at PCI bus 0, device 19, function 0 00:11:28.968 ===================================================== 00:11:28.968 Reservations: Not Supported 00:11:28.968 ===================================================== 00:11:28.968 NVMe Controller at PCI bus 0, device 18, function 0 00:11:28.968 ===================================================== 00:11:28.968 Reservations: Not Supported 00:11:28.968 Reservation test passed 00:11:28.968 00:11:28.968 real 0m0.333s 00:11:28.968 user 0m0.122s 00:11:28.968 sys 0m0.162s 00:11:28.968 19:33:19 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:28.968 19:33:19 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:11:28.968 ************************************ 00:11:28.968 END TEST nvme_reserve 00:11:28.968 ************************************ 00:11:28.968 19:33:19 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:28.968 19:33:19 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:28.968 19:33:19 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:28.968 19:33:19 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.968 19:33:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:28.968 ************************************ 00:11:28.968 START TEST nvme_err_injection 00:11:28.968 ************************************ 00:11:28.968 19:33:19 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:29.226 NVMe Error Injection test 00:11:29.226 Attached to 0000:00:10.0 00:11:29.226 Attached to 0000:00:11.0 00:11:29.226 Attached to 0000:00:13.0 00:11:29.226 Attached to 0000:00:12.0 00:11:29.226 0000:00:10.0: get features failed as expected 00:11:29.226 0000:00:11.0: get features 
failed as expected 00:11:29.226 0000:00:13.0: get features failed as expected 00:11:29.226 0000:00:12.0: get features failed as expected 00:11:29.226 0000:00:11.0: get features successfully as expected 00:11:29.226 0000:00:13.0: get features successfully as expected 00:11:29.226 0000:00:12.0: get features successfully as expected 00:11:29.226 0000:00:10.0: get features successfully as expected 00:11:29.226 0000:00:10.0: read failed as expected 00:11:29.226 0000:00:11.0: read failed as expected 00:11:29.226 0000:00:13.0: read failed as expected 00:11:29.226 0000:00:12.0: read failed as expected 00:11:29.226 0000:00:10.0: read successfully as expected 00:11:29.226 0000:00:11.0: read successfully as expected 00:11:29.226 0000:00:13.0: read successfully as expected 00:11:29.226 0000:00:12.0: read successfully as expected 00:11:29.226 Cleaning up... 00:11:29.226 00:11:29.226 real 0m0.341s 00:11:29.226 user 0m0.124s 00:11:29.226 sys 0m0.170s 00:11:29.226 19:33:19 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:29.226 19:33:19 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:11:29.226 ************************************ 00:11:29.226 END TEST nvme_err_injection 00:11:29.226 ************************************ 00:11:29.226 19:33:19 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:29.226 19:33:19 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:29.226 19:33:19 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:11:29.226 19:33:19 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.226 19:33:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:29.226 ************************************ 00:11:29.226 START TEST nvme_overhead 00:11:29.226 ************************************ 00:11:29.226 19:33:19 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:30.697 Initializing NVMe Controllers 00:11:30.697 Attached to 0000:00:10.0 00:11:30.697 Attached to 0000:00:11.0 00:11:30.697 Attached to 0000:00:13.0 00:11:30.697 Attached to 0000:00:12.0 00:11:30.697 Initialization complete. Launching workers. 
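The nvme_overhead run starting here reports, just below, how long each 4096-byte I/O spends in the submission path and in the completion path ("submit (in ns) avg, min, max" and "complete (in ns) avg, min, max"), followed by cumulative histograms in the same style as the nvme_perf output. As a schematic of the measurement idea only — the real instrumentation lives in the C overhead tool, and the timed calls below are placeholders — per-phase min/avg/max bookkeeping can be kept like this:

import time

class PhaseStats:
    """Running avg/min/max of one timed phase, in nanoseconds."""
    def __init__(self):
        self.n, self.total, self.min_ns, self.max_ns = 0, 0, float("inf"), 0

    def record(self, ns):
        self.n += 1
        self.total += ns
        self.min_ns = min(self.min_ns, ns)
        self.max_ns = max(self.max_ns, ns)

    def summary(self):
        return self.total / self.n, self.min_ns, self.max_ns   # avg, min, max

def timed(stats, fn, *args):
    t0 = time.monotonic_ns()
    out = fn(*args)
    stats.record(time.monotonic_ns() - t0)
    return out

submit, complete = PhaseStats(), PhaseStats()
timed(submit, time.sleep, 0)        # stand-in for wrapping an actual submission call
timed(complete, time.sleep, 0)      # stand-in for wrapping a completion-poll hit
print(submit.summary(), complete.summary())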
00:11:30.697 submit (in ns) avg, min, max = 15286.8, 12411.4, 117490.5 00:11:30.697 complete (in ns) avg, min, max = 9978.6, 7875.2, 137221.9 00:11:30.697 00:11:30.697 Submit histogram 00:11:30.698 ================ 00:11:30.698 Range in us Cumulative Count 00:11:30.698 12.373 - 12.434: 0.0236% ( 3) 00:11:30.698 12.495 - 12.556: 0.0394% ( 2) 00:11:30.698 12.556 - 12.617: 0.0709% ( 4) 00:11:30.698 12.617 - 12.678: 0.0788% ( 1) 00:11:30.698 12.678 - 12.739: 0.1103% ( 4) 00:11:30.698 12.800 - 12.861: 0.1340% ( 3) 00:11:30.698 12.922 - 12.983: 0.1497% ( 2) 00:11:30.698 13.227 - 13.288: 0.1734% ( 3) 00:11:30.698 13.288 - 13.349: 0.4885% ( 40) 00:11:30.698 13.349 - 13.410: 1.2765% ( 100) 00:11:30.698 13.410 - 13.470: 2.9942% ( 218) 00:11:30.698 13.470 - 13.531: 5.4290% ( 309) 00:11:30.698 13.531 - 13.592: 8.2657% ( 360) 00:11:30.698 13.592 - 13.653: 11.9061% ( 462) 00:11:30.698 13.653 - 13.714: 14.8924% ( 379) 00:11:30.698 13.714 - 13.775: 17.5715% ( 340) 00:11:30.698 13.775 - 13.836: 19.5493% ( 251) 00:11:30.698 13.836 - 13.897: 21.0622% ( 192) 00:11:30.698 13.897 - 13.958: 21.9920% ( 118) 00:11:30.698 13.958 - 14.019: 22.8587% ( 110) 00:11:30.698 14.019 - 14.080: 23.3945% ( 68) 00:11:30.698 14.080 - 14.141: 24.0958% ( 89) 00:11:30.698 14.141 - 14.202: 25.3408% ( 158) 00:11:30.698 14.202 - 14.263: 27.7126% ( 301) 00:11:30.698 14.263 - 14.324: 31.7627% ( 514) 00:11:30.698 14.324 - 14.385: 37.1050% ( 678) 00:11:30.698 14.385 - 14.446: 43.2905% ( 785) 00:11:30.698 14.446 - 14.507: 49.2081% ( 751) 00:11:30.698 14.507 - 14.568: 54.1722% ( 630) 00:11:30.698 14.568 - 14.629: 58.0254% ( 489) 00:11:30.698 14.629 - 14.690: 61.3112% ( 417) 00:11:30.698 14.690 - 14.750: 64.6127% ( 419) 00:11:30.698 14.750 - 14.811: 67.9615% ( 425) 00:11:30.698 14.811 - 14.872: 70.7824% ( 358) 00:11:30.698 14.872 - 14.933: 73.6743% ( 367) 00:11:30.698 14.933 - 14.994: 75.8727% ( 279) 00:11:30.698 14.994 - 15.055: 77.6298% ( 223) 00:11:30.698 15.055 - 15.116: 78.9615% ( 169) 00:11:30.698 15.116 - 15.177: 80.0883% ( 143) 00:11:30.698 15.177 - 15.238: 81.1914% ( 140) 00:11:30.698 15.238 - 15.299: 82.1685% ( 124) 00:11:30.698 15.299 - 15.360: 82.9801% ( 103) 00:11:30.698 15.360 - 15.421: 83.6341% ( 83) 00:11:30.698 15.421 - 15.482: 84.3275% ( 88) 00:11:30.698 15.482 - 15.543: 84.8239% ( 63) 00:11:30.698 15.543 - 15.604: 85.2809% ( 58) 00:11:30.698 15.604 - 15.726: 86.1083% ( 105) 00:11:30.698 15.726 - 15.848: 86.7229% ( 78) 00:11:30.698 15.848 - 15.970: 87.2508% ( 67) 00:11:30.698 15.970 - 16.091: 87.6369% ( 49) 00:11:30.698 16.091 - 16.213: 87.9285% ( 37) 00:11:30.698 16.213 - 16.335: 88.1254% ( 25) 00:11:30.698 16.335 - 16.457: 88.2121% ( 11) 00:11:30.698 16.457 - 16.579: 88.3461% ( 17) 00:11:30.698 16.579 - 16.701: 88.4800% ( 17) 00:11:30.698 16.701 - 16.823: 88.5746% ( 12) 00:11:30.698 16.823 - 16.945: 88.6613% ( 11) 00:11:30.698 16.945 - 17.067: 88.7716% ( 14) 00:11:30.698 17.067 - 17.189: 88.8425% ( 9) 00:11:30.698 17.189 - 17.310: 88.9055% ( 8) 00:11:30.698 17.310 - 17.432: 88.9607% ( 7) 00:11:30.698 17.432 - 17.554: 89.0080% ( 6) 00:11:30.698 17.554 - 17.676: 89.0868% ( 10) 00:11:30.698 17.676 - 17.798: 89.1656% ( 10) 00:11:30.698 17.798 - 17.920: 89.2286% ( 8) 00:11:30.698 17.920 - 18.042: 89.3231% ( 12) 00:11:30.698 18.042 - 18.164: 89.3862% ( 8) 00:11:30.698 18.164 - 18.286: 89.5753% ( 24) 00:11:30.698 18.286 - 18.408: 89.7644% ( 24) 00:11:30.698 18.408 - 18.530: 89.9614% ( 25) 00:11:30.698 18.530 - 18.651: 90.1347% ( 22) 00:11:30.698 18.651 - 18.773: 90.3317% ( 25) 00:11:30.698 18.773 - 18.895: 90.5130% ( 23) 
00:11:30.698 18.895 - 19.017: 90.8360% ( 41) 00:11:30.698 19.017 - 19.139: 91.0803% ( 31) 00:11:30.698 19.139 - 19.261: 91.3797% ( 38) 00:11:30.698 19.261 - 19.383: 91.6634% ( 36) 00:11:30.698 19.383 - 19.505: 91.8998% ( 30) 00:11:30.698 19.505 - 19.627: 92.1756% ( 35) 00:11:30.698 19.627 - 19.749: 92.5774% ( 51) 00:11:30.698 19.749 - 19.870: 92.9556% ( 48) 00:11:30.698 19.870 - 19.992: 93.3733% ( 53) 00:11:30.698 19.992 - 20.114: 93.7988% ( 54) 00:11:30.698 20.114 - 20.236: 94.1139% ( 40) 00:11:30.698 20.236 - 20.358: 94.5473% ( 55) 00:11:30.698 20.358 - 20.480: 94.9098% ( 46) 00:11:30.698 20.480 - 20.602: 95.1777% ( 34) 00:11:30.698 20.602 - 20.724: 95.4692% ( 37) 00:11:30.698 20.724 - 20.846: 95.7371% ( 34) 00:11:30.698 20.846 - 20.968: 95.8947% ( 20) 00:11:30.698 20.968 - 21.090: 96.0681% ( 22) 00:11:30.698 21.090 - 21.211: 96.2178% ( 19) 00:11:30.698 21.211 - 21.333: 96.2966% ( 10) 00:11:30.698 21.333 - 21.455: 96.4542% ( 20) 00:11:30.698 21.455 - 21.577: 96.6197% ( 21) 00:11:30.698 21.577 - 21.699: 96.7536% ( 17) 00:11:30.698 21.699 - 21.821: 96.8324% ( 10) 00:11:30.698 21.821 - 21.943: 96.9348% ( 13) 00:11:30.698 21.943 - 22.065: 97.0609% ( 16) 00:11:30.698 22.065 - 22.187: 97.1082% ( 6) 00:11:30.698 22.187 - 22.309: 97.1712% ( 8) 00:11:30.698 22.309 - 22.430: 97.2421% ( 9) 00:11:30.698 22.430 - 22.552: 97.2579% ( 2) 00:11:30.698 22.552 - 22.674: 97.3131% ( 7) 00:11:30.698 22.674 - 22.796: 97.3603% ( 6) 00:11:30.698 22.796 - 22.918: 97.3840% ( 3) 00:11:30.698 22.918 - 23.040: 97.4313% ( 6) 00:11:30.698 23.040 - 23.162: 97.5022% ( 9) 00:11:30.698 23.162 - 23.284: 97.5573% ( 7) 00:11:30.698 23.284 - 23.406: 97.5652% ( 1) 00:11:30.698 23.406 - 23.528: 97.6204% ( 7) 00:11:30.698 23.528 - 23.650: 97.6519% ( 4) 00:11:30.698 23.650 - 23.771: 97.6834% ( 4) 00:11:30.698 23.771 - 23.893: 97.7149% ( 4) 00:11:30.698 24.015 - 24.137: 97.7622% ( 6) 00:11:30.698 24.137 - 24.259: 97.7937% ( 4) 00:11:30.698 24.259 - 24.381: 97.8016% ( 1) 00:11:30.698 24.381 - 24.503: 97.8095% ( 1) 00:11:30.698 24.503 - 24.625: 97.8174% ( 1) 00:11:30.698 24.625 - 24.747: 97.8410% ( 3) 00:11:30.698 24.747 - 24.869: 97.8646% ( 3) 00:11:30.698 24.869 - 24.990: 97.9513% ( 11) 00:11:30.698 24.990 - 25.112: 98.0143% ( 8) 00:11:30.698 25.112 - 25.234: 98.1089% ( 12) 00:11:30.698 25.234 - 25.356: 98.1877% ( 10) 00:11:30.698 25.356 - 25.478: 98.2822% ( 12) 00:11:30.698 25.478 - 25.600: 98.3453% ( 8) 00:11:30.698 25.600 - 25.722: 98.4556% ( 14) 00:11:30.698 25.722 - 25.844: 98.5344% ( 10) 00:11:30.698 25.844 - 25.966: 98.6132% ( 10) 00:11:30.698 25.966 - 26.088: 98.6999% ( 11) 00:11:30.698 26.088 - 26.210: 98.7550% ( 7) 00:11:30.698 26.210 - 26.331: 98.7708% ( 2) 00:11:30.698 26.331 - 26.453: 98.7944% ( 3) 00:11:30.698 26.453 - 26.575: 98.8417% ( 6) 00:11:30.698 26.575 - 26.697: 98.8732% ( 4) 00:11:30.698 26.697 - 26.819: 98.8969% ( 3) 00:11:30.698 26.819 - 26.941: 98.9205% ( 3) 00:11:30.698 26.941 - 27.063: 98.9441% ( 3) 00:11:30.698 27.063 - 27.185: 98.9520% ( 1) 00:11:30.698 27.185 - 27.307: 98.9678% ( 2) 00:11:30.698 27.307 - 27.429: 98.9835% ( 2) 00:11:30.698 27.429 - 27.550: 98.9993% ( 2) 00:11:30.698 27.550 - 27.672: 99.0072% ( 1) 00:11:30.698 28.282 - 28.404: 99.0229% ( 2) 00:11:30.698 28.526 - 28.648: 99.0387% ( 2) 00:11:30.698 28.648 - 28.770: 99.0466% ( 1) 00:11:30.698 28.891 - 29.013: 99.0623% ( 2) 00:11:30.698 29.135 - 29.257: 99.0781% ( 2) 00:11:30.698 29.257 - 29.379: 99.0860% ( 1) 00:11:30.698 29.501 - 29.623: 99.0938% ( 1) 00:11:30.698 29.623 - 29.745: 99.1096% ( 2) 00:11:30.698 29.745 - 29.867: 99.1175% ( 1) 
00:11:30.698 29.867 - 29.989: 99.1332% ( 2) 00:11:30.698 29.989 - 30.110: 99.1411% ( 1) 00:11:30.698 30.110 - 30.232: 99.1648% ( 3) 00:11:30.698 30.232 - 30.354: 99.1805% ( 2) 00:11:30.698 30.354 - 30.476: 99.2278% ( 6) 00:11:30.698 30.476 - 30.598: 99.2514% ( 3) 00:11:30.698 30.598 - 30.720: 99.2830% ( 4) 00:11:30.698 30.720 - 30.842: 99.3066% ( 3) 00:11:30.698 30.842 - 30.964: 99.3381% ( 4) 00:11:30.698 30.964 - 31.086: 99.3696% ( 4) 00:11:30.698 31.086 - 31.208: 99.3854% ( 2) 00:11:30.698 31.208 - 31.451: 99.4248% ( 5) 00:11:30.698 31.451 - 31.695: 99.5036% ( 10) 00:11:30.698 31.695 - 31.939: 99.5351% ( 4) 00:11:30.698 31.939 - 32.183: 99.5509% ( 2) 00:11:30.698 32.183 - 32.427: 99.5903% ( 5) 00:11:30.698 32.427 - 32.670: 99.6375% ( 6) 00:11:30.698 32.670 - 32.914: 99.6454% ( 1) 00:11:30.698 32.914 - 33.158: 99.6691% ( 3) 00:11:30.698 33.158 - 33.402: 99.6848% ( 2) 00:11:30.698 33.402 - 33.646: 99.7085% ( 3) 00:11:30.698 33.646 - 33.890: 99.7163% ( 1) 00:11:30.698 33.890 - 34.133: 99.7321% ( 2) 00:11:30.698 34.133 - 34.377: 99.7479% ( 2) 00:11:30.698 34.377 - 34.621: 99.7557% ( 1) 00:11:30.698 34.621 - 34.865: 99.7636% ( 1) 00:11:30.698 35.109 - 35.352: 99.7715% ( 1) 00:11:30.698 35.352 - 35.596: 99.7794% ( 1) 00:11:30.698 35.596 - 35.840: 99.8030% ( 3) 00:11:30.698 35.840 - 36.084: 99.8266% ( 3) 00:11:30.698 36.328 - 36.571: 99.8503% ( 3) 00:11:30.698 36.815 - 37.059: 99.8582% ( 1) 00:11:30.698 37.059 - 37.303: 99.8739% ( 2) 00:11:30.698 37.303 - 37.547: 99.8818% ( 1) 00:11:30.698 37.790 - 38.034: 99.8897% ( 1) 00:11:30.698 38.278 - 38.522: 99.9054% ( 2) 00:11:30.698 39.253 - 39.497: 99.9133% ( 1) 00:11:30.698 39.497 - 39.741: 99.9291% ( 2) 00:11:30.698 42.910 - 43.154: 99.9370% ( 1) 00:11:30.698 43.398 - 43.642: 99.9448% ( 1) 00:11:30.698 43.642 - 43.886: 99.9527% ( 1) 00:11:30.698 44.373 - 44.617: 99.9606% ( 1) 00:11:30.698 49.981 - 50.225: 99.9685% ( 1) 00:11:30.698 66.316 - 66.804: 99.9764% ( 1) 00:11:30.699 97.524 - 98.011: 99.9842% ( 1) 00:11:30.699 98.499 - 98.987: 99.9921% ( 1) 00:11:30.699 117.029 - 117.516: 100.0000% ( 1) 00:11:30.699 00:11:30.699 Complete histogram 00:11:30.699 ================== 00:11:30.699 Range in us Cumulative Count 00:11:30.699 7.863 - 7.924: 0.0236% ( 3) 00:11:30.699 7.924 - 7.985: 0.0709% ( 6) 00:11:30.699 7.985 - 8.046: 0.1024% ( 4) 00:11:30.699 8.046 - 8.107: 0.1103% ( 1) 00:11:30.699 8.107 - 8.168: 0.1182% ( 1) 00:11:30.699 8.168 - 8.229: 0.1261% ( 1) 00:11:30.699 8.472 - 8.533: 0.1340% ( 1) 00:11:30.699 8.533 - 8.594: 0.3309% ( 25) 00:11:30.699 8.594 - 8.655: 2.9627% ( 334) 00:11:30.699 8.655 - 8.716: 9.4398% ( 822) 00:11:30.699 8.716 - 8.777: 14.6718% ( 664) 00:11:30.699 8.777 - 8.838: 17.8867% ( 408) 00:11:30.699 8.838 - 8.899: 20.0772% ( 278) 00:11:30.699 8.899 - 8.960: 21.6689% ( 202) 00:11:30.699 8.960 - 9.021: 22.6932% ( 130) 00:11:30.699 9.021 - 9.082: 23.3000% ( 77) 00:11:30.699 9.082 - 9.143: 23.7885% ( 62) 00:11:30.699 9.143 - 9.204: 27.8780% ( 519) 00:11:30.699 9.204 - 9.265: 37.4596% ( 1216) 00:11:30.699 9.265 - 9.326: 47.4037% ( 1262) 00:11:30.699 9.326 - 9.387: 54.0147% ( 839) 00:11:30.699 9.387 - 9.448: 58.3800% ( 554) 00:11:30.699 9.448 - 9.509: 62.4379% ( 515) 00:11:30.699 9.509 - 9.570: 68.1428% ( 724) 00:11:30.699 9.570 - 9.630: 73.1148% ( 631) 00:11:30.699 9.630 - 9.691: 76.6843% ( 453) 00:11:30.699 9.691 - 9.752: 78.9851% ( 292) 00:11:30.699 9.752 - 9.813: 80.9865% ( 254) 00:11:30.699 9.813 - 9.874: 82.5940% ( 204) 00:11:30.699 9.874 - 9.935: 83.8153% ( 155) 00:11:30.699 9.935 - 9.996: 84.5402% ( 92) 00:11:30.699 9.996 - 
10.057: 85.3361% ( 101) 00:11:30.699 10.057 - 10.118: 86.0058% ( 85) 00:11:30.699 10.118 - 10.179: 86.5495% ( 69) 00:11:30.699 10.179 - 10.240: 87.1247% ( 73) 00:11:30.699 10.240 - 10.301: 87.5345% ( 52) 00:11:30.699 10.301 - 10.362: 87.8891% ( 45) 00:11:30.699 10.362 - 10.423: 88.2042% ( 40) 00:11:30.699 10.423 - 10.484: 88.5037% ( 38) 00:11:30.699 10.484 - 10.545: 88.7558% ( 32) 00:11:30.699 10.545 - 10.606: 88.9607% ( 26) 00:11:30.699 10.606 - 10.667: 89.1183% ( 20) 00:11:30.699 10.667 - 10.728: 89.2759% ( 20) 00:11:30.699 10.728 - 10.789: 89.4177% ( 18) 00:11:30.699 10.789 - 10.850: 89.4886% ( 9) 00:11:30.699 10.850 - 10.910: 89.6304% ( 18) 00:11:30.699 10.910 - 10.971: 89.7408% ( 14) 00:11:30.699 10.971 - 11.032: 89.7959% ( 7) 00:11:30.699 11.032 - 11.093: 89.8511% ( 7) 00:11:30.699 11.093 - 11.154: 89.9299% ( 10) 00:11:30.699 11.154 - 11.215: 89.9535% ( 3) 00:11:30.699 11.215 - 11.276: 90.0165% ( 8) 00:11:30.699 11.276 - 11.337: 90.0875% ( 9) 00:11:30.699 11.337 - 11.398: 90.1426% ( 7) 00:11:30.699 11.398 - 11.459: 90.2293% ( 11) 00:11:30.699 11.459 - 11.520: 90.3002% ( 9) 00:11:30.699 11.520 - 11.581: 90.4184% ( 15) 00:11:30.699 11.581 - 11.642: 90.4578% ( 5) 00:11:30.699 11.642 - 11.703: 90.5602% ( 13) 00:11:30.699 11.703 - 11.764: 90.6312% ( 9) 00:11:30.699 11.764 - 11.825: 90.6706% ( 5) 00:11:30.699 11.825 - 11.886: 90.7336% ( 8) 00:11:30.699 11.886 - 11.947: 90.8203% ( 11) 00:11:30.699 11.947 - 12.008: 90.9148% ( 12) 00:11:30.699 12.008 - 12.069: 90.9463% ( 4) 00:11:30.699 12.069 - 12.130: 91.0645% ( 15) 00:11:30.699 12.130 - 12.190: 91.2064% ( 18) 00:11:30.699 12.190 - 12.251: 91.2852% ( 10) 00:11:30.699 12.251 - 12.312: 91.3482% ( 8) 00:11:30.699 12.312 - 12.373: 91.4034% ( 7) 00:11:30.699 12.373 - 12.434: 91.4822% ( 10) 00:11:30.699 12.434 - 12.495: 91.5058% ( 3) 00:11:30.699 12.495 - 12.556: 91.5609% ( 7) 00:11:30.699 12.556 - 12.617: 91.6476% ( 11) 00:11:30.699 12.617 - 12.678: 91.7264% ( 10) 00:11:30.699 12.678 - 12.739: 91.8210% ( 12) 00:11:30.699 12.739 - 12.800: 91.9470% ( 16) 00:11:30.699 12.800 - 12.861: 92.1677% ( 28) 00:11:30.699 12.861 - 12.922: 92.3725% ( 26) 00:11:30.699 12.922 - 12.983: 92.5853% ( 27) 00:11:30.699 12.983 - 13.044: 92.7350% ( 19) 00:11:30.699 13.044 - 13.105: 92.9084% ( 22) 00:11:30.699 13.105 - 13.166: 93.0896% ( 23) 00:11:30.699 13.166 - 13.227: 93.1999% ( 14) 00:11:30.699 13.227 - 13.288: 93.3654% ( 21) 00:11:30.699 13.288 - 13.349: 93.4915% ( 16) 00:11:30.699 13.349 - 13.410: 93.6412% ( 19) 00:11:30.699 13.410 - 13.470: 93.8066% ( 21) 00:11:30.699 13.470 - 13.531: 94.1139% ( 39) 00:11:30.699 13.531 - 13.592: 94.2952% ( 23) 00:11:30.699 13.592 - 13.653: 94.4449% ( 19) 00:11:30.699 13.653 - 13.714: 94.6025% ( 20) 00:11:30.699 13.714 - 13.775: 94.7522% ( 19) 00:11:30.699 13.775 - 13.836: 94.8704% ( 15) 00:11:30.699 13.836 - 13.897: 95.0201% ( 19) 00:11:30.699 13.897 - 13.958: 95.1619% ( 18) 00:11:30.699 13.958 - 14.019: 95.3274% ( 21) 00:11:30.699 14.019 - 14.080: 95.4220% ( 12) 00:11:30.699 14.080 - 14.141: 95.5480% ( 16) 00:11:30.699 14.141 - 14.202: 95.6662% ( 15) 00:11:30.699 14.202 - 14.263: 95.7371% ( 9) 00:11:30.699 14.263 - 14.324: 95.8396% ( 13) 00:11:30.699 14.324 - 14.385: 95.9105% ( 9) 00:11:30.699 14.385 - 14.446: 95.9814% ( 9) 00:11:30.699 14.446 - 14.507: 96.0523% ( 9) 00:11:30.699 14.507 - 14.568: 96.0917% ( 5) 00:11:30.699 14.568 - 14.629: 96.1311% ( 5) 00:11:30.699 14.629 - 14.690: 96.1942% ( 8) 00:11:30.699 14.690 - 14.750: 96.2572% ( 8) 00:11:30.699 14.750 - 14.811: 96.3045% ( 6) 00:11:30.699 14.811 - 14.872: 96.3281% ( 3) 
00:11:30.699 14.872 - 14.933: 96.3439% ( 2) 00:11:30.699 14.933 - 14.994: 96.3754% ( 4) 00:11:30.699 14.994 - 15.055: 96.4227% ( 6) 00:11:30.699 15.116 - 15.177: 96.4542% ( 4) 00:11:30.699 15.177 - 15.238: 96.4857% ( 4) 00:11:30.699 15.238 - 15.299: 96.5015% ( 2) 00:11:30.699 15.299 - 15.360: 96.5645% ( 8) 00:11:30.699 15.360 - 15.421: 96.6512% ( 11) 00:11:30.699 15.421 - 15.482: 96.6827% ( 4) 00:11:30.699 15.482 - 15.543: 96.7221% ( 5) 00:11:30.699 15.543 - 15.604: 96.7536% ( 4) 00:11:30.699 15.604 - 15.726: 96.8009% ( 6) 00:11:30.699 15.726 - 15.848: 96.8639% ( 8) 00:11:30.699 15.848 - 15.970: 96.9270% ( 8) 00:11:30.699 15.970 - 16.091: 96.9979% ( 9) 00:11:30.699 16.091 - 16.213: 97.0452% ( 6) 00:11:30.699 16.213 - 16.335: 97.1239% ( 10) 00:11:30.699 16.335 - 16.457: 97.1633% ( 5) 00:11:30.699 16.457 - 16.579: 97.2343% ( 9) 00:11:30.699 16.579 - 16.701: 97.2815% ( 6) 00:11:30.699 16.701 - 16.823: 97.3761% ( 12) 00:11:30.699 16.823 - 16.945: 97.4470% ( 9) 00:11:30.699 16.945 - 17.067: 97.5888% ( 18) 00:11:30.699 17.067 - 17.189: 97.6440% ( 7) 00:11:30.699 17.189 - 17.310: 97.7149% ( 9) 00:11:30.699 17.310 - 17.432: 97.7780% ( 8) 00:11:30.699 17.432 - 17.554: 97.8331% ( 7) 00:11:30.699 17.554 - 17.676: 97.8725% ( 5) 00:11:30.699 17.676 - 17.798: 97.9040% ( 4) 00:11:30.699 17.798 - 17.920: 97.9198% ( 2) 00:11:30.699 17.920 - 18.042: 97.9277% ( 1) 00:11:30.699 18.042 - 18.164: 97.9434% ( 2) 00:11:30.699 18.164 - 18.286: 97.9592% ( 2) 00:11:30.699 18.286 - 18.408: 97.9749% ( 2) 00:11:30.699 18.408 - 18.530: 97.9986% ( 3) 00:11:30.699 18.530 - 18.651: 98.0065% ( 1) 00:11:30.699 18.651 - 18.773: 98.0301% ( 3) 00:11:30.699 18.773 - 18.895: 98.0459% ( 2) 00:11:30.699 18.895 - 19.017: 98.0616% ( 2) 00:11:30.699 19.017 - 19.139: 98.0853% ( 3) 00:11:30.699 19.383 - 19.505: 98.1089% ( 3) 00:11:30.699 19.505 - 19.627: 98.1168% ( 1) 00:11:30.699 19.627 - 19.749: 98.1247% ( 1) 00:11:30.699 19.749 - 19.870: 98.1641% ( 5) 00:11:30.699 19.870 - 19.992: 98.2113% ( 6) 00:11:30.699 19.992 - 20.114: 98.3138% ( 13) 00:11:30.699 20.114 - 20.236: 98.3532% ( 5) 00:11:30.699 20.236 - 20.358: 98.4162% ( 8) 00:11:30.699 20.358 - 20.480: 98.5344% ( 15) 00:11:30.699 20.480 - 20.602: 98.6211% ( 11) 00:11:30.699 20.602 - 20.724: 98.7077% ( 11) 00:11:30.699 20.724 - 20.846: 98.8023% ( 12) 00:11:30.699 20.846 - 20.968: 98.8417% ( 5) 00:11:30.699 20.968 - 21.090: 98.8969% ( 7) 00:11:30.699 21.090 - 21.211: 98.9599% ( 8) 00:11:30.699 21.211 - 21.333: 99.0072% ( 6) 00:11:30.699 21.333 - 21.455: 99.0623% ( 7) 00:11:30.699 21.455 - 21.577: 99.0860% ( 3) 00:11:30.699 21.577 - 21.699: 99.1096% ( 3) 00:11:30.699 21.699 - 21.821: 99.1175% ( 1) 00:11:30.699 21.821 - 21.943: 99.1726% ( 7) 00:11:30.699 21.943 - 22.065: 99.1884% ( 2) 00:11:30.699 22.065 - 22.187: 99.1963% ( 1) 00:11:30.699 22.187 - 22.309: 99.2120% ( 2) 00:11:30.699 22.430 - 22.552: 99.2278% ( 2) 00:11:30.699 22.552 - 22.674: 99.2357% ( 1) 00:11:30.699 22.674 - 22.796: 99.2436% ( 1) 00:11:30.699 22.796 - 22.918: 99.2593% ( 2) 00:11:30.699 23.162 - 23.284: 99.2751% ( 2) 00:11:30.699 23.406 - 23.528: 99.2830% ( 1) 00:11:30.699 23.650 - 23.771: 99.2908% ( 1) 00:11:30.699 23.771 - 23.893: 99.2987% ( 1) 00:11:30.699 24.137 - 24.259: 99.3145% ( 2) 00:11:30.699 24.259 - 24.381: 99.3302% ( 2) 00:11:30.699 24.381 - 24.503: 99.3381% ( 1) 00:11:30.699 24.625 - 24.747: 99.3460% ( 1) 00:11:30.699 24.747 - 24.869: 99.3696% ( 3) 00:11:30.699 25.112 - 25.234: 99.3775% ( 1) 00:11:30.699 25.234 - 25.356: 99.4327% ( 7) 00:11:30.699 25.356 - 25.478: 99.4405% ( 1) 00:11:30.699 25.478 - 
25.600: 99.4563% ( 2) 00:11:30.699 25.600 - 25.722: 99.4957% ( 5) 00:11:30.699 25.722 - 25.844: 99.5272% ( 4) 00:11:30.699 25.844 - 25.966: 99.5587% ( 4) 00:11:30.699 25.966 - 26.088: 99.5824% ( 3) 00:11:30.700 26.088 - 26.210: 99.6060% ( 3) 00:11:30.700 26.210 - 26.331: 99.6297% ( 3) 00:11:30.700 26.331 - 26.453: 99.6375% ( 1) 00:11:30.700 26.453 - 26.575: 99.6612% ( 3) 00:11:30.700 26.575 - 26.697: 99.6848% ( 3) 00:11:30.700 26.941 - 27.063: 99.6927% ( 1) 00:11:30.700 27.063 - 27.185: 99.7006% ( 1) 00:11:30.700 27.185 - 27.307: 99.7163% ( 2) 00:11:30.700 27.307 - 27.429: 99.7242% ( 1) 00:11:30.700 28.038 - 28.160: 99.7321% ( 1) 00:11:30.700 28.160 - 28.282: 99.7400% ( 1) 00:11:30.700 28.770 - 28.891: 99.7479% ( 1) 00:11:30.700 29.135 - 29.257: 99.7557% ( 1) 00:11:30.700 29.257 - 29.379: 99.7636% ( 1) 00:11:30.700 29.501 - 29.623: 99.7794% ( 2) 00:11:30.700 29.745 - 29.867: 99.7873% ( 1) 00:11:30.700 29.989 - 30.110: 99.7951% ( 1) 00:11:30.700 30.720 - 30.842: 99.8345% ( 5) 00:11:30.700 31.208 - 31.451: 99.8503% ( 2) 00:11:30.700 31.695 - 31.939: 99.8582% ( 1) 00:11:30.700 32.183 - 32.427: 99.8660% ( 1) 00:11:30.700 32.427 - 32.670: 99.8818% ( 2) 00:11:30.700 32.670 - 32.914: 99.8976% ( 2) 00:11:30.700 33.646 - 33.890: 99.9054% ( 1) 00:11:30.700 34.377 - 34.621: 99.9133% ( 1) 00:11:30.700 35.109 - 35.352: 99.9212% ( 1) 00:11:30.700 36.328 - 36.571: 99.9370% ( 2) 00:11:30.700 36.571 - 36.815: 99.9448% ( 1) 00:11:30.700 37.547 - 37.790: 99.9527% ( 1) 00:11:30.700 39.253 - 39.497: 99.9606% ( 1) 00:11:30.700 42.667 - 42.910: 99.9685% ( 1) 00:11:30.700 53.638 - 53.882: 99.9764% ( 1) 00:11:30.700 56.076 - 56.320: 99.9842% ( 1) 00:11:30.700 115.566 - 116.053: 99.9921% ( 1) 00:11:30.700 136.533 - 137.509: 100.0000% ( 1) 00:11:30.700 00:11:30.700 00:11:30.700 real 0m1.294s 00:11:30.700 user 0m1.088s 00:11:30.700 sys 0m0.153s 00:11:30.700 ************************************ 00:11:30.700 END TEST nvme_overhead 00:11:30.700 ************************************ 00:11:30.700 19:33:21 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:30.700 19:33:21 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:11:30.700 19:33:21 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:30.700 19:33:21 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:30.700 19:33:21 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:11:30.700 19:33:21 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.700 19:33:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:30.700 ************************************ 00:11:30.700 START TEST nvme_arbitration 00:11:30.700 ************************************ 00:11:30.700 19:33:21 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:33.983 Initializing NVMe Controllers 00:11:33.983 Attached to 0000:00:10.0 00:11:33.983 Attached to 0000:00:11.0 00:11:33.983 Attached to 0000:00:13.0 00:11:33.983 Attached to 0000:00:12.0 00:11:33.983 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:11:33.983 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:11:33.983 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:11:33.983 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:11:33.983 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:11:33.983 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:11:33.983 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with 
configuration: 00:11:33.984 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:11:33.984 Initialization complete. Launching workers. 00:11:33.984 Starting thread on core 1 with urgent priority queue 00:11:33.984 Starting thread on core 2 with urgent priority queue 00:11:33.984 Starting thread on core 3 with urgent priority queue 00:11:33.984 Starting thread on core 0 with urgent priority queue 00:11:33.984 QEMU NVMe Ctrl (12340 ) core 0: 469.33 IO/s 213.07 secs/100000 ios 00:11:33.984 QEMU NVMe Ctrl (12342 ) core 0: 469.33 IO/s 213.07 secs/100000 ios 00:11:33.984 QEMU NVMe Ctrl (12341 ) core 1: 426.67 IO/s 234.38 secs/100000 ios 00:11:33.984 QEMU NVMe Ctrl (12342 ) core 1: 426.67 IO/s 234.38 secs/100000 ios 00:11:33.984 QEMU NVMe Ctrl (12343 ) core 2: 448.00 IO/s 223.21 secs/100000 ios 00:11:33.984 QEMU NVMe Ctrl (12342 ) core 3: 448.00 IO/s 223.21 secs/100000 ios 00:11:33.984 ======================================================== 00:11:33.984 00:11:33.984 00:11:33.984 real 0m3.430s 00:11:33.984 user 0m9.415s 00:11:33.984 sys 0m0.171s 00:11:33.984 19:33:24 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:33.984 19:33:24 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:11:33.984 ************************************ 00:11:33.984 END TEST nvme_arbitration 00:11:33.984 ************************************ 00:11:33.984 19:33:24 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:33.984 19:33:24 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:11:33.984 19:33:24 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:33.984 19:33:24 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.984 19:33:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:33.984 ************************************ 00:11:33.984 START TEST nvme_single_aen 00:11:33.984 ************************************ 00:11:33.984 19:33:24 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:11:34.549 Asynchronous Event Request test 00:11:34.549 Attached to 0000:00:10.0 00:11:34.549 Attached to 0000:00:11.0 00:11:34.549 Attached to 0000:00:13.0 00:11:34.549 Attached to 0000:00:12.0 00:11:34.549 Reset controller to setup AER completions for this process 00:11:34.549 Registering asynchronous event callbacks... 
00:11:34.549 Getting orig temperature thresholds of all controllers 00:11:34.549 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:34.549 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:34.549 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:34.549 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:34.549 Setting all controllers temperature threshold low to trigger AER 00:11:34.549 Waiting for all controllers temperature threshold to be set lower 00:11:34.549 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:34.549 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:34.549 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:34.549 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:34.549 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:34.549 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:34.549 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:34.549 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:34.549 Waiting for all controllers to trigger AER and reset threshold 00:11:34.549 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:34.549 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:34.549 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:34.549 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:34.549 Cleaning up... 00:11:34.549 00:11:34.549 real 0m0.338s 00:11:34.549 user 0m0.116s 00:11:34.549 sys 0m0.174s 00:11:34.549 19:33:25 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:34.549 19:33:25 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:11:34.549 ************************************ 00:11:34.549 END TEST nvme_single_aen 00:11:34.549 ************************************ 00:11:34.549 19:33:25 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:34.549 19:33:25 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:11:34.549 19:33:25 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:34.549 19:33:25 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.549 19:33:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:34.549 ************************************ 00:11:34.549 START TEST nvme_doorbell_aers 00:11:34.549 ************************************ 00:11:34.549 19:33:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:11:34.549 19:33:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:11:34.549 19:33:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:11:34.549 19:33:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:11:34.549 19:33:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:11:34.549 19:33:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:34.549 19:33:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:11:34.549 19:33:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:34.549 19:33:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:34.549 19:33:25 nvme.nvme_doorbell_aers -- 
common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:34.549 19:33:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:11:34.549 19:33:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:34.549 19:33:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:34.549 19:33:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:34.805 [2024-07-15 19:33:25.536122] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70477) is not found. Dropping the request. 00:11:44.814 Executing: test_write_invalid_db 00:11:44.814 Waiting for AER completion... 00:11:44.814 Failure: test_write_invalid_db 00:11:44.814 00:11:44.814 Executing: test_invalid_db_write_overflow_sq 00:11:44.814 Waiting for AER completion... 00:11:44.814 Failure: test_invalid_db_write_overflow_sq 00:11:44.814 00:11:44.814 Executing: test_invalid_db_write_overflow_cq 00:11:44.814 Waiting for AER completion... 00:11:44.814 Failure: test_invalid_db_write_overflow_cq 00:11:44.814 00:11:44.814 19:33:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:44.814 19:33:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:44.814 [2024-07-15 19:33:35.592244] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70477) is not found. Dropping the request. 00:11:54.798 Executing: test_write_invalid_db 00:11:54.798 Waiting for AER completion... 00:11:54.798 Failure: test_write_invalid_db 00:11:54.798 00:11:54.798 Executing: test_invalid_db_write_overflow_sq 00:11:54.798 Waiting for AER completion... 00:11:54.798 Failure: test_invalid_db_write_overflow_sq 00:11:54.798 00:11:54.798 Executing: test_invalid_db_write_overflow_cq 00:11:54.798 Waiting for AER completion... 00:11:54.798 Failure: test_invalid_db_write_overflow_cq 00:11:54.798 00:11:54.798 19:33:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:54.798 19:33:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:55.056 [2024-07-15 19:33:45.624229] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70477) is not found. Dropping the request. 00:12:05.052 Executing: test_write_invalid_db 00:12:05.052 Waiting for AER completion... 00:12:05.052 Failure: test_write_invalid_db 00:12:05.052 00:12:05.052 Executing: test_invalid_db_write_overflow_sq 00:12:05.052 Waiting for AER completion... 00:12:05.052 Failure: test_invalid_db_write_overflow_sq 00:12:05.052 00:12:05.052 Executing: test_invalid_db_write_overflow_cq 00:12:05.052 Waiting for AER completion... 
00:12:05.052 Failure: test_invalid_db_write_overflow_cq 00:12:05.052 00:12:05.052 19:33:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:05.052 19:33:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:05.052 [2024-07-15 19:33:55.698743] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70477) is not found. Dropping the request. 00:12:15.043 Executing: test_write_invalid_db 00:12:15.043 Waiting for AER completion... 00:12:15.043 Failure: test_write_invalid_db 00:12:15.043 00:12:15.043 Executing: test_invalid_db_write_overflow_sq 00:12:15.043 Waiting for AER completion... 00:12:15.043 Failure: test_invalid_db_write_overflow_sq 00:12:15.043 00:12:15.043 Executing: test_invalid_db_write_overflow_cq 00:12:15.043 Waiting for AER completion... 00:12:15.043 Failure: test_invalid_db_write_overflow_cq 00:12:15.043 00:12:15.043 00:12:15.043 real 0m40.279s 00:12:15.043 user 0m29.634s 00:12:15.043 sys 0m10.268s 00:12:15.043 19:34:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:15.043 ************************************ 00:12:15.043 END TEST nvme_doorbell_aers 00:12:15.043 ************************************ 00:12:15.043 19:34:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:12:15.043 19:34:05 nvme -- common/autotest_common.sh@1142 -- # return 0 00:12:15.043 19:34:05 nvme -- nvme/nvme.sh@97 -- # uname 00:12:15.043 19:34:05 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:12:15.043 19:34:05 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:15.043 19:34:05 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:12:15.043 19:34:05 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.044 19:34:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:15.044 ************************************ 00:12:15.044 START TEST nvme_multi_aen 00:12:15.044 ************************************ 00:12:15.044 19:34:05 nvme.nvme_multi_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:15.044 [2024-07-15 19:34:05.798021] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70477) is not found. Dropping the request. 00:12:15.044 [2024-07-15 19:34:05.798146] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70477) is not found. Dropping the request. 00:12:15.044 [2024-07-15 19:34:05.798172] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70477) is not found. Dropping the request. 00:12:15.044 [2024-07-15 19:34:05.800286] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70477) is not found. Dropping the request. 00:12:15.044 [2024-07-15 19:34:05.800344] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70477) is not found. Dropping the request. 00:12:15.044 [2024-07-15 19:34:05.800365] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70477) is not found. Dropping the request. 
00:12:15.044 [2024-07-15 19:34:05.801994] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70477) is not found. Dropping the request. 00:12:15.044 [2024-07-15 19:34:05.802042] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70477) is not found. Dropping the request. 00:12:15.044 [2024-07-15 19:34:05.802080] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70477) is not found. Dropping the request. 00:12:15.044 [2024-07-15 19:34:05.803773] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70477) is not found. Dropping the request. 00:12:15.044 [2024-07-15 19:34:05.803857] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70477) is not found. Dropping the request. 00:12:15.044 [2024-07-15 19:34:05.803877] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70477) is not found. Dropping the request. 00:12:15.044 Child process pid: 70994 00:12:15.609 [Child] Asynchronous Event Request test 00:12:15.609 [Child] Attached to 0000:00:10.0 00:12:15.609 [Child] Attached to 0000:00:11.0 00:12:15.609 [Child] Attached to 0000:00:13.0 00:12:15.609 [Child] Attached to 0000:00:12.0 00:12:15.609 [Child] Registering asynchronous event callbacks... 00:12:15.609 [Child] Getting orig temperature thresholds of all controllers 00:12:15.609 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:15.609 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:15.609 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:15.609 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:15.609 [Child] Waiting for all controllers to trigger AER and reset threshold 00:12:15.609 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:15.609 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:15.609 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:15.609 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:15.609 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:15.609 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:15.609 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:15.609 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:15.609 [Child] Cleaning up... 00:12:15.609 Asynchronous Event Request test 00:12:15.609 Attached to 0000:00:10.0 00:12:15.609 Attached to 0000:00:11.0 00:12:15.609 Attached to 0000:00:13.0 00:12:15.609 Attached to 0000:00:12.0 00:12:15.609 Reset controller to setup AER completions for this process 00:12:15.609 Registering asynchronous event callbacks... 
00:12:15.609 Getting orig temperature thresholds of all controllers 00:12:15.609 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:15.609 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:15.609 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:15.609 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:15.609 Setting all controllers temperature threshold low to trigger AER 00:12:15.609 Waiting for all controllers temperature threshold to be set lower 00:12:15.609 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:15.609 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:12:15.609 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:15.609 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:12:15.609 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:15.609 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:12:15.609 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:15.609 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:12:15.609 Waiting for all controllers to trigger AER and reset threshold 00:12:15.609 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:15.609 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:15.609 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:15.609 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:15.609 Cleaning up... 00:12:15.609 00:12:15.609 real 0m0.698s 00:12:15.609 user 0m0.233s 00:12:15.609 sys 0m0.358s 00:12:15.609 19:34:06 nvme.nvme_multi_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:15.609 ************************************ 00:12:15.609 19:34:06 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:12:15.609 END TEST nvme_multi_aen 00:12:15.609 ************************************ 00:12:15.609 19:34:06 nvme -- common/autotest_common.sh@1142 -- # return 0 00:12:15.609 19:34:06 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:15.609 19:34:06 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:15.609 19:34:06 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.609 19:34:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:15.609 ************************************ 00:12:15.609 START TEST nvme_startup 00:12:15.609 ************************************ 00:12:15.609 19:34:06 nvme.nvme_startup -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:15.867 Initializing NVMe Controllers 00:12:15.867 Attached to 0000:00:10.0 00:12:15.867 Attached to 0000:00:11.0 00:12:15.867 Attached to 0000:00:13.0 00:12:15.867 Attached to 0000:00:12.0 00:12:15.867 Initialization complete. 00:12:15.867 Time used:190978.641 (us). 
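For context on the temperature-threshold AER exercised by the nvme_single_aen and nvme_multi_aen tests above: the test lowers the controller's temperature threshold (Feature ID 0x04) below the reported composite temperature, waits for the resulting Asynchronous Event, then restores the threshold. The same NVMe-level mechanism can be sketched with nvme-cli against a kernel-attached controller; the device name /dev/nvme0 and the exact threshold values are assumptions for illustration only, and the SPDK tests drive this through their own user-space driver rather than nvme-cli.

# Inspect the current composite temperature and the configured threshold (Feature ID 0x04)
sudo nvme smart-log /dev/nvme0 | grep -i temperature
sudo nvme get-feature /dev/nvme0 -f 0x04 -H

# Lower the threshold below the current temperature (values are in Kelvin: 0x0142 = 322 K)
# so the controller raises a temperature Asynchronous Event
sudo nvme set-feature /dev/nvme0 -f 0x04 -v 0x0142

# Restore the default threshold reported in the log above (343 K = 0x0157)
sudo nvme set-feature /dev/nvme0 -f 0x04 -v 0x0157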
00:12:15.867 00:12:15.867 real 0m0.290s 00:12:15.867 user 0m0.092s 00:12:15.867 sys 0m0.154s 00:12:15.867 19:34:06 nvme.nvme_startup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:15.867 ************************************ 00:12:15.867 END TEST nvme_startup 00:12:15.867 ************************************ 00:12:15.867 19:34:06 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:12:15.867 19:34:06 nvme -- common/autotest_common.sh@1142 -- # return 0 00:12:15.867 19:34:06 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:12:15.867 19:34:06 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:15.867 19:34:06 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.867 19:34:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:15.867 ************************************ 00:12:15.867 START TEST nvme_multi_secondary 00:12:15.867 ************************************ 00:12:15.867 19:34:06 nvme.nvme_multi_secondary -- common/autotest_common.sh@1123 -- # nvme_multi_secondary 00:12:15.867 19:34:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=71050 00:12:15.867 19:34:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=71051 00:12:15.867 19:34:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:12:15.867 19:34:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:12:15.867 19:34:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:19.144 Initializing NVMe Controllers 00:12:19.144 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:19.144 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:19.144 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:19.144 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:19.144 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:19.144 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:19.144 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:19.144 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:19.144 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:19.144 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:19.144 Initialization complete. Launching workers. 
00:12:19.144 ======================================================== 00:12:19.144 Latency(us) 00:12:19.144 Device Information : IOPS MiB/s Average min max 00:12:19.145 PCIE (0000:00:10.0) NSID 1 from core 1: 5188.61 20.27 3081.72 1032.56 15389.07 00:12:19.145 PCIE (0000:00:11.0) NSID 1 from core 1: 5188.61 20.27 3083.19 1055.66 14592.76 00:12:19.145 PCIE (0000:00:13.0) NSID 1 from core 1: 5188.61 20.27 3083.12 1063.60 14349.44 00:12:19.145 PCIE (0000:00:12.0) NSID 1 from core 1: 5188.61 20.27 3083.00 1086.64 16626.69 00:12:19.145 PCIE (0000:00:12.0) NSID 2 from core 1: 5188.61 20.27 3083.20 1069.56 16164.25 00:12:19.145 PCIE (0000:00:12.0) NSID 3 from core 1: 5188.61 20.27 3083.46 1063.11 15626.52 00:12:19.145 ======================================================== 00:12:19.145 Total : 31131.65 121.61 3082.95 1032.56 16626.69 00:12:19.145 00:12:19.403 Initializing NVMe Controllers 00:12:19.403 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:19.403 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:19.403 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:19.403 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:19.403 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:19.403 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:19.403 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:19.403 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:19.403 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:19.403 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:19.403 Initialization complete. Launching workers. 00:12:19.403 ======================================================== 00:12:19.403 Latency(us) 00:12:19.403 Device Information : IOPS MiB/s Average min max 00:12:19.403 PCIE (0000:00:10.0) NSID 1 from core 2: 2174.26 8.49 7359.58 1593.13 23360.20 00:12:19.403 PCIE (0000:00:11.0) NSID 1 from core 2: 2174.26 8.49 7368.63 1698.24 19330.54 00:12:19.403 PCIE (0000:00:13.0) NSID 1 from core 2: 2174.26 8.49 7368.78 1657.32 18934.29 00:12:19.403 PCIE (0000:00:12.0) NSID 1 from core 2: 2174.26 8.49 7368.88 1733.36 18939.87 00:12:19.403 PCIE (0000:00:12.0) NSID 2 from core 2: 2174.26 8.49 7368.99 1576.94 18552.87 00:12:19.403 PCIE (0000:00:12.0) NSID 3 from core 2: 2174.26 8.49 7369.05 1633.59 18923.63 00:12:19.403 ======================================================== 00:12:19.403 Total : 13045.55 50.96 7367.32 1576.94 23360.20 00:12:19.403 00:12:19.403 19:34:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 71050 00:12:21.934 Initializing NVMe Controllers 00:12:21.934 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:21.934 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:21.934 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:21.934 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:21.934 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:21.934 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:21.934 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:21.934 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:21.934 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:21.934 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:21.934 Initialization complete. Launching workers. 
00:12:21.934 ======================================================== 00:12:21.934 Latency(us) 00:12:21.934 Device Information : IOPS MiB/s Average min max 00:12:21.934 PCIE (0000:00:10.0) NSID 1 from core 0: 7208.84 28.16 2217.77 1011.83 10780.17 00:12:21.934 PCIE (0000:00:11.0) NSID 1 from core 0: 7208.84 28.16 2218.99 1037.45 11324.70 00:12:21.934 PCIE (0000:00:13.0) NSID 1 from core 0: 7208.84 28.16 2218.95 1024.15 12980.54 00:12:21.934 PCIE (0000:00:12.0) NSID 1 from core 0: 7208.84 28.16 2218.90 1031.13 12600.40 00:12:21.934 PCIE (0000:00:12.0) NSID 2 from core 0: 7208.84 28.16 2218.85 1032.94 11463.07 00:12:21.934 PCIE (0000:00:12.0) NSID 3 from core 0: 7208.84 28.16 2218.80 961.46 10824.83 00:12:21.934 ======================================================== 00:12:21.934 Total : 43253.06 168.96 2218.71 961.46 12980.54 00:12:21.934 00:12:21.934 19:34:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 71051 00:12:21.934 19:34:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=71120 00:12:21.934 19:34:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:12:21.934 19:34:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=71121 00:12:21.934 19:34:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:12:21.934 19:34:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:25.216 Initializing NVMe Controllers 00:12:25.216 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:25.216 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:25.216 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:25.216 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:25.216 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:25.216 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:25.216 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:25.216 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:25.216 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:25.216 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:25.216 Initialization complete. Launching workers. 
00:12:25.216 ======================================================== 00:12:25.216 Latency(us) 00:12:25.216 Device Information : IOPS MiB/s Average min max 00:12:25.216 PCIE (0000:00:10.0) NSID 1 from core 0: 5181.89 20.24 3085.81 1092.82 8204.10 00:12:25.216 PCIE (0000:00:11.0) NSID 1 from core 0: 5181.89 20.24 3087.05 1128.76 8050.17 00:12:25.216 PCIE (0000:00:13.0) NSID 1 from core 0: 5181.89 20.24 3086.96 1109.31 8266.79 00:12:25.216 PCIE (0000:00:12.0) NSID 1 from core 0: 5181.89 20.24 3086.84 1124.79 8151.88 00:12:25.216 PCIE (0000:00:12.0) NSID 2 from core 0: 5181.89 20.24 3086.88 1107.38 7969.46 00:12:25.216 PCIE (0000:00:12.0) NSID 3 from core 0: 5181.89 20.24 3086.79 1141.24 8027.33 00:12:25.216 ======================================================== 00:12:25.216 Total : 31091.34 121.45 3086.72 1092.82 8266.79 00:12:25.216 00:12:25.216 Initializing NVMe Controllers 00:12:25.216 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:25.216 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:25.216 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:25.216 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:25.216 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:25.216 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:25.216 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:25.216 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:25.216 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:25.216 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:25.216 Initialization complete. Launching workers. 00:12:25.216 ======================================================== 00:12:25.216 Latency(us) 00:12:25.216 Device Information : IOPS MiB/s Average min max 00:12:25.216 PCIE (0000:00:10.0) NSID 1 from core 1: 5044.79 19.71 3169.40 1024.34 9289.92 00:12:25.216 PCIE (0000:00:11.0) NSID 1 from core 1: 5044.79 19.71 3170.64 1043.94 9163.43 00:12:25.216 PCIE (0000:00:13.0) NSID 1 from core 1: 5044.79 19.71 3170.40 1060.12 9024.93 00:12:25.216 PCIE (0000:00:12.0) NSID 1 from core 1: 5044.79 19.71 3170.16 1066.19 9195.86 00:12:25.216 PCIE (0000:00:12.0) NSID 2 from core 1: 5044.79 19.71 3169.88 1021.84 9190.96 00:12:25.216 PCIE (0000:00:12.0) NSID 3 from core 1: 5044.79 19.71 3169.64 954.79 7971.18 00:12:25.216 ======================================================== 00:12:25.216 Total : 30268.76 118.24 3170.02 954.79 9289.92 00:12:25.216 00:12:27.118 Initializing NVMe Controllers 00:12:27.118 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:27.118 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:27.118 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:27.118 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:27.118 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:27.118 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:27.118 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:27.118 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:27.118 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:27.118 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:27.118 Initialization complete. Launching workers. 
00:12:27.118 ======================================================== 00:12:27.118 Latency(us) 00:12:27.118 Device Information : IOPS MiB/s Average min max 00:12:27.118 PCIE (0000:00:10.0) NSID 1 from core 2: 3613.31 14.11 4425.85 1029.99 14386.95 00:12:27.118 PCIE (0000:00:11.0) NSID 1 from core 2: 3613.31 14.11 4426.55 1012.11 19600.93 00:12:27.118 PCIE (0000:00:13.0) NSID 1 from core 2: 3613.31 14.11 4426.96 1052.32 15390.67 00:12:27.118 PCIE (0000:00:12.0) NSID 1 from core 2: 3613.31 14.11 4426.77 1048.19 14935.01 00:12:27.118 PCIE (0000:00:12.0) NSID 2 from core 2: 3613.31 14.11 4426.19 1018.58 15505.99 00:12:27.118 PCIE (0000:00:12.0) NSID 3 from core 2: 3613.31 14.11 4425.77 826.01 15584.23 00:12:27.118 ======================================================== 00:12:27.118 Total : 21679.89 84.69 4426.35 826.01 19600.93 00:12:27.118 00:12:27.118 19:34:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 71120 00:12:27.118 19:34:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 71121 00:12:27.118 ************************************ 00:12:27.118 END TEST nvme_multi_secondary 00:12:27.118 ************************************ 00:12:27.118 00:12:27.118 real 0m11.312s 00:12:27.118 user 0m18.724s 00:12:27.118 sys 0m1.118s 00:12:27.118 19:34:17 nvme.nvme_multi_secondary -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:27.118 19:34:17 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:12:27.376 19:34:17 nvme -- common/autotest_common.sh@1142 -- # return 0 00:12:27.376 19:34:17 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:12:27.376 19:34:17 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:12:27.376 19:34:17 nvme -- common/autotest_common.sh@1087 -- # [[ -e /proc/70052 ]] 00:12:27.376 19:34:17 nvme -- common/autotest_common.sh@1088 -- # kill 70052 00:12:27.376 19:34:17 nvme -- common/autotest_common.sh@1089 -- # wait 70052 00:12:27.376 [2024-07-15 19:34:17.954193] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70993) is not found. Dropping the request. 00:12:27.376 [2024-07-15 19:34:17.954272] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70993) is not found. Dropping the request. 00:12:27.376 [2024-07-15 19:34:17.954298] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70993) is not found. Dropping the request. 00:12:27.376 [2024-07-15 19:34:17.954323] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70993) is not found. Dropping the request. 00:12:27.376 [2024-07-15 19:34:17.958184] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70993) is not found. Dropping the request. 00:12:27.376 [2024-07-15 19:34:17.958245] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70993) is not found. Dropping the request. 00:12:27.376 [2024-07-15 19:34:17.958269] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70993) is not found. Dropping the request. 00:12:27.376 [2024-07-15 19:34:17.958309] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70993) is not found. Dropping the request. 
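As a quick consistency check on the spdk_nvme_perf tables above: with the 4096-byte I/O size passed via -o 4096, the MiB/s column follows directly from the IOPS column, and the arbitration example's secs/100000 ios column is just the inverse of its IO/s figure. A minimal sketch in plain shell arithmetic (no SPDK involved):

# 5188.61 IO/s * 4096 B per I/O ~= 20.27 MiB/s (matches the "from core 1" rows)
awk 'BEGIN { printf "%.2f MiB/s\n", 5188.61 * 4096 / (1024 * 1024) }'

# arbitration example: 469.33 IO/s -> ~213.07 seconds per 100000 I/Os
awk 'BEGIN { printf "%.2f s per 100000 ios\n", 100000 / 469.33 }'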
00:12:27.376 [2024-07-15 19:34:17.962139] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70993) is not found. Dropping the request. 00:12:27.376 [2024-07-15 19:34:17.962195] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70993) is not found. Dropping the request. 00:12:27.376 [2024-07-15 19:34:17.962218] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70993) is not found. Dropping the request. 00:12:27.376 [2024-07-15 19:34:17.962241] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70993) is not found. Dropping the request. 00:12:27.376 [2024-07-15 19:34:17.966165] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70993) is not found. Dropping the request. 00:12:27.376 [2024-07-15 19:34:17.966226] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70993) is not found. Dropping the request. 00:12:27.376 [2024-07-15 19:34:17.966249] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70993) is not found. Dropping the request. 00:12:27.376 [2024-07-15 19:34:17.966273] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70993) is not found. Dropping the request. 00:12:27.634 [2024-07-15 19:34:18.266337] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:12:27.634 19:34:18 nvme -- common/autotest_common.sh@1091 -- # rm -f /var/run/spdk_stub0 00:12:27.634 19:34:18 nvme -- common/autotest_common.sh@1095 -- # echo 2 00:12:27.634 19:34:18 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:27.634 19:34:18 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:27.634 19:34:18 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:27.634 19:34:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:27.634 ************************************ 00:12:27.634 START TEST bdev_nvme_reset_stuck_adm_cmd 00:12:27.634 ************************************ 00:12:27.634 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:27.634 * Looking for test storage... 
00:12:27.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:27.635 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:12:27.635 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:12:27.635 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:12:27.635 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:12:27.635 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:12:27.635 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:12:27.635 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:12:27.635 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:12:27.635 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:12:27.635 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:12:27.635 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:12:27.635 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:12:27.635 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:27.635 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:12:27.635 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:27.926 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:12:27.926 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:27.926 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:12:27.926 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:12:27.926 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:12:27.926 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=71286 00:12:27.926 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:12:27.926 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:27.926 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 71286 00:12:27.926 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 71286 ']' 00:12:27.926 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.926 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:27.926 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.926 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:27.926 19:34:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:27.926 [2024-07-15 19:34:18.594594] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:12:27.926 [2024-07-15 19:34:18.595225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71286 ] 00:12:28.187 [2024-07-15 19:34:18.787760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.445 [2024-07-15 19:34:19.087976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.445 [2024-07-15 19:34:19.087995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.445 [2024-07-15 19:34:19.088116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.445 [2024-07-15 19:34:19.088136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.381 19:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:29.381 19:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:12:29.381 19:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:12:29.381 19:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.381 19:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:29.639 nvme0n1 00:12:29.639 19:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.639 19:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:12:29.639 19:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_Upb1l.txt 00:12:29.639 19:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:12:29.639 19:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.639 19:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:29.639 true 00:12:29.639 19:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.639 19:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:12:29.639 19:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721072060 00:12:29.639 19:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=71310 00:12:29.639 19:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:12:29.639 19:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:29.639 19:34:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:12:31.540 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:12:31.540 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.540 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:31.540 [2024-07-15 19:34:22.266916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:12:31.540 [2024-07-15 19:34:22.267327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:31.540 [2024-07-15 19:34:22.267365] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:31.540 [2024-07-15 19:34:22.267388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.540 [2024-07-15 19:34:22.269246] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:31.540 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.540 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 71310 00:12:31.540 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 71310 00:12:31.540 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 71310 00:12:31.540 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:12:31.540 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:12:31.540 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:12:31.540 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.540 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:31.540 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.540 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:12:31.540 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_Upb1l.txt 00:12:31.799 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:12:31.799 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:12:31.799 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_Upb1l.txt 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 71286 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 71286 ']' 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 71286 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71286 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:31.800 killing process with pid 71286 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71286' 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 71286 00:12:31.800 19:34:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 71286 00:12:35.080 19:34:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:12:35.080 19:34:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:12:35.080 00:12:35.080 real 0m7.107s 00:12:35.080 user 0m24.322s 00:12:35.080 sys 0m0.720s 00:12:35.080 19:34:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:35.080 ************************************ 00:12:35.080 END TEST bdev_nvme_reset_stuck_adm_cmd 00:12:35.080 19:34:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:35.080 ************************************ 00:12:35.080 19:34:25 nvme -- common/autotest_common.sh@1142 -- # return 0 00:12:35.080 19:34:25 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:12:35.080 19:34:25 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:12:35.080 19:34:25 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:35.080 19:34:25 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.080 19:34:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:35.080 ************************************ 00:12:35.080 START TEST nvme_fio 00:12:35.080 ************************************ 00:12:35.080 19:34:25 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:12:35.080 19:34:25 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:12:35.080 19:34:25 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:12:35.080 19:34:25 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:12:35.080 19:34:25 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:12:35.080 19:34:25 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:12:35.080 19:34:25 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:35.080 19:34:25 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:35.080 19:34:25 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:12:35.080 19:34:25 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:12:35.080 19:34:25 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:35.080 19:34:25 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:12:35.080 19:34:25 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:12:35.080 19:34:25 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:35.080 19:34:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:35.080 19:34:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:35.081 19:34:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:35.081 19:34:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:35.337 19:34:26 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:35.337 19:34:26 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:35.337 19:34:26 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:35.337 19:34:26 
nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:35.337 19:34:26 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:35.337 19:34:26 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:35.338 19:34:26 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:35.338 19:34:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:12:35.338 19:34:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:35.338 19:34:26 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:35.338 19:34:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:35.338 19:34:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:12:35.338 19:34:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:35.338 19:34:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:35.338 19:34:26 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:35.338 19:34:26 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:12:35.338 19:34:26 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:35.338 19:34:26 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:35.602 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:35.602 fio-3.35 00:12:35.602 Starting 1 thread 00:12:40.873 00:12:40.873 test: (groupid=0, jobs=1): err= 0: pid=71467: Mon Jul 15 19:34:31 2024 00:12:40.873 read: IOPS=13.2k, BW=51.5MiB/s (54.0MB/s)(103MiB/2001msec) 00:12:40.873 slat (usec): min=4, max=108, avg= 8.16, stdev= 4.16 00:12:40.873 clat (usec): min=302, max=39434, avg=4791.34, stdev=1825.23 00:12:40.873 lat (usec): min=309, max=39439, avg=4799.50, stdev=1827.07 00:12:40.873 clat percentiles (usec): 00:12:40.874 | 1.00th=[ 1876], 5.00th=[ 2737], 10.00th=[ 3097], 20.00th=[ 3261], 00:12:40.874 | 30.00th=[ 3621], 40.00th=[ 4113], 50.00th=[ 4686], 60.00th=[ 5342], 00:12:40.874 | 70.00th=[ 5735], 80.00th=[ 5997], 90.00th=[ 6456], 95.00th=[ 7439], 00:12:40.874 | 99.00th=[ 8848], 99.50th=[10028], 99.90th=[34866], 99.95th=[34866], 00:12:40.874 | 99.99th=[34866] 00:12:40.874 bw ( KiB/s): min=47760, max=56544, per=96.72%, avg=50996.67, stdev=4826.39, samples=3 00:12:40.874 iops : min=11940, max=14136, avg=12749.67, stdev=1206.24, samples=3 00:12:40.874 write: IOPS=13.2k, BW=51.5MiB/s (54.0MB/s)(103MiB/2001msec); 0 zone resets 00:12:40.874 slat (nsec): min=4774, max=94254, avg=8252.08, stdev=4129.24 00:12:40.874 clat (usec): min=349, max=39302, avg=4886.18, stdev=2508.57 00:12:40.874 lat (usec): min=356, max=39308, avg=4894.44, stdev=2509.79 00:12:40.874 clat percentiles (usec): 00:12:40.874 | 1.00th=[ 1909], 5.00th=[ 2737], 10.00th=[ 3097], 20.00th=[ 3261], 00:12:40.874 | 30.00th=[ 3621], 40.00th=[ 4113], 50.00th=[ 4686], 60.00th=[ 5342], 00:12:40.874 | 70.00th=[ 5735], 80.00th=[ 5997], 90.00th=[ 6521], 95.00th=[ 7701], 00:12:40.874 | 99.00th=[ 9372], 99.50th=[11600], 99.90th=[38536], 99.95th=[38536], 00:12:40.874 | 99.99th=[39060] 00:12:40.874 bw ( KiB/s): min=47088, 
max=56888, per=96.84%, avg=51052.33, stdev=5161.05, samples=3 00:12:40.874 iops : min=11772, max=14222, avg=12763.00, stdev=1290.31, samples=3 00:12:40.874 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:12:40.874 lat (msec) : 2=1.28%, 4=36.40%, 10=61.63%, 20=0.42%, 50=0.24% 00:12:40.874 cpu : usr=98.65%, sys=0.10%, ctx=3, majf=0, minf=605 00:12:40.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:40.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:40.874 issued rwts: total=26376,26373,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.874 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:40.874 00:12:40.874 Run status group 0 (all jobs): 00:12:40.874 READ: bw=51.5MiB/s (54.0MB/s), 51.5MiB/s-51.5MiB/s (54.0MB/s-54.0MB/s), io=103MiB (108MB), run=2001-2001msec 00:12:40.874 WRITE: bw=51.5MiB/s (54.0MB/s), 51.5MiB/s-51.5MiB/s (54.0MB/s-54.0MB/s), io=103MiB (108MB), run=2001-2001msec 00:12:41.132 ----------------------------------------------------- 00:12:41.132 Suppressions used: 00:12:41.132 count bytes template 00:12:41.132 1 32 /usr/src/fio/parse.c 00:12:41.132 1 8 libtcmalloc_minimal.so 00:12:41.132 ----------------------------------------------------- 00:12:41.132 00:12:41.132 19:34:31 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:41.132 19:34:31 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:41.132 19:34:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:41.132 19:34:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:41.389 19:34:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:41.389 19:34:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:41.645 19:34:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:41.645 19:34:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:41.645 19:34:32 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:41.645 19:34:32 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:41.645 19:34:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:41.645 19:34:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:41.645 19:34:32 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:41.645 19:34:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:12:41.645 19:34:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:41.645 19:34:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:41.645 19:34:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:41.645 19:34:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:12:41.645 19:34:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:41.645 
19:34:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:41.645 19:34:32 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:41.645 19:34:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:12:41.645 19:34:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:41.645 19:34:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:41.902 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:41.902 fio-3.35 00:12:41.902 Starting 1 thread 00:12:45.290 00:12:45.290 test: (groupid=0, jobs=1): err= 0: pid=71528: Mon Jul 15 19:34:35 2024 00:12:45.290 read: IOPS=17.7k, BW=69.3MiB/s (72.7MB/s)(139MiB/2001msec) 00:12:45.290 slat (usec): min=4, max=109, avg= 5.99, stdev= 1.79 00:12:45.290 clat (usec): min=247, max=9506, avg=3589.37, stdev=694.58 00:12:45.290 lat (usec): min=252, max=9546, avg=3595.36, stdev=695.53 00:12:45.290 clat percentiles (usec): 00:12:45.290 | 1.00th=[ 2540], 5.00th=[ 2933], 10.00th=[ 3064], 20.00th=[ 3163], 00:12:45.290 | 30.00th=[ 3228], 40.00th=[ 3261], 50.00th=[ 3326], 60.00th=[ 3392], 00:12:45.290 | 70.00th=[ 3654], 80.00th=[ 4178], 90.00th=[ 4490], 95.00th=[ 5014], 00:12:45.290 | 99.00th=[ 5800], 99.50th=[ 6325], 99.90th=[ 7701], 99.95th=[ 7832], 00:12:45.290 | 99.99th=[ 9241] 00:12:45.290 bw ( KiB/s): min=69792, max=77840, per=100.00%, avg=72746.67, stdev=4429.79, samples=3 00:12:45.290 iops : min=17448, max=19460, avg=18186.67, stdev=1107.45, samples=3 00:12:45.290 write: IOPS=17.7k, BW=69.3MiB/s (72.7MB/s)(139MiB/2001msec); 0 zone resets 00:12:45.290 slat (nsec): min=4735, max=42341, avg=6101.16, stdev=1686.16 00:12:45.290 clat (usec): min=226, max=9320, avg=3599.81, stdev=699.38 00:12:45.290 lat (usec): min=231, max=9335, avg=3605.91, stdev=700.34 00:12:45.290 clat percentiles (usec): 00:12:45.290 | 1.00th=[ 2540], 5.00th=[ 2933], 10.00th=[ 3064], 20.00th=[ 3163], 00:12:45.290 | 30.00th=[ 3228], 40.00th=[ 3261], 50.00th=[ 3326], 60.00th=[ 3392], 00:12:45.290 | 70.00th=[ 3654], 80.00th=[ 4178], 90.00th=[ 4490], 95.00th=[ 5080], 00:12:45.290 | 99.00th=[ 5866], 99.50th=[ 6259], 99.90th=[ 7701], 99.95th=[ 7898], 00:12:45.290 | 99.99th=[ 9110] 00:12:45.290 bw ( KiB/s): min=69760, max=77600, per=100.00%, avg=72669.33, stdev=4293.10, samples=3 00:12:45.290 iops : min=17440, max=19400, avg=18167.33, stdev=1073.28, samples=3 00:12:45.290 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:12:45.290 lat (msec) : 2=0.09%, 4=76.57%, 10=23.29% 00:12:45.290 cpu : usr=99.20%, sys=0.10%, ctx=3, majf=0, minf=606 00:12:45.290 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:45.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.290 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:45.290 issued rwts: total=35506,35497,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:45.290 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:45.290 00:12:45.290 Run status group 0 (all jobs): 00:12:45.290 READ: bw=69.3MiB/s (72.7MB/s), 69.3MiB/s-69.3MiB/s (72.7MB/s-72.7MB/s), io=139MiB (145MB), run=2001-2001msec 00:12:45.290 WRITE: bw=69.3MiB/s (72.7MB/s), 69.3MiB/s-69.3MiB/s (72.7MB/s-72.7MB/s), io=139MiB (145MB), 
run=2001-2001msec 00:12:45.548 ----------------------------------------------------- 00:12:45.548 Suppressions used: 00:12:45.548 count bytes template 00:12:45.548 1 32 /usr/src/fio/parse.c 00:12:45.548 1 8 libtcmalloc_minimal.so 00:12:45.548 ----------------------------------------------------- 00:12:45.548 00:12:45.548 19:34:36 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:45.548 19:34:36 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:45.548 19:34:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:45.548 19:34:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:45.806 19:34:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:45.806 19:34:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:46.064 19:34:36 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:46.064 19:34:36 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:46.064 19:34:36 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:46.064 19:34:36 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:46.064 19:34:36 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:46.064 19:34:36 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:46.064 19:34:36 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:46.064 19:34:36 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:12:46.064 19:34:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:46.064 19:34:36 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:46.064 19:34:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:12:46.064 19:34:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:46.064 19:34:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:46.064 19:34:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:46.064 19:34:36 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:46.064 19:34:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:12:46.064 19:34:36 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:46.064 19:34:36 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:46.321 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:46.321 fio-3.35 00:12:46.321 Starting 1 thread 00:12:50.506 00:12:50.506 test: (groupid=0, jobs=1): err= 0: pid=71589: Mon Jul 15 19:34:40 2024 00:12:50.506 read: IOPS=15.4k, BW=60.1MiB/s (63.0MB/s)(120MiB/2001msec) 00:12:50.506 slat (nsec): min=4620, max=51958, 
avg=7252.31, stdev=4064.01 00:12:50.506 clat (usec): min=305, max=14110, avg=4133.66, stdev=1266.37 00:12:50.506 lat (usec): min=312, max=14156, avg=4140.91, stdev=1269.52 00:12:50.506 clat percentiles (usec): 00:12:50.506 | 1.00th=[ 2147], 5.00th=[ 2671], 10.00th=[ 2966], 20.00th=[ 3163], 00:12:50.506 | 30.00th=[ 3326], 40.00th=[ 3556], 50.00th=[ 4080], 60.00th=[ 4293], 00:12:50.506 | 70.00th=[ 4490], 80.00th=[ 4686], 90.00th=[ 5145], 95.00th=[ 7635], 00:12:50.506 | 99.00th=[ 8094], 99.50th=[ 8455], 99.90th=[ 8979], 99.95th=[11207], 00:12:50.506 | 99.99th=[13698] 00:12:50.506 bw ( KiB/s): min=45272, max=69984, per=96.55%, avg=59418.67, stdev=12739.32, samples=3 00:12:50.506 iops : min=11318, max=17496, avg=14854.67, stdev=3184.83, samples=3 00:12:50.506 write: IOPS=15.4k, BW=60.1MiB/s (63.1MB/s)(120MiB/2001msec); 0 zone resets 00:12:50.506 slat (nsec): min=4725, max=65633, avg=7438.29, stdev=4253.41 00:12:50.506 clat (usec): min=392, max=13827, avg=4148.65, stdev=1279.46 00:12:50.506 lat (usec): min=398, max=13853, avg=4156.09, stdev=1282.79 00:12:50.506 clat percentiles (usec): 00:12:50.506 | 1.00th=[ 2180], 5.00th=[ 2671], 10.00th=[ 2966], 20.00th=[ 3163], 00:12:50.506 | 30.00th=[ 3326], 40.00th=[ 3556], 50.00th=[ 4113], 60.00th=[ 4293], 00:12:50.506 | 70.00th=[ 4490], 80.00th=[ 4686], 90.00th=[ 5211], 95.00th=[ 7701], 00:12:50.506 | 99.00th=[ 8160], 99.50th=[ 8455], 99.90th=[ 9372], 99.95th=[11469], 00:12:50.506 | 99.99th=[13304] 00:12:50.506 bw ( KiB/s): min=45496, max=69392, per=96.17%, avg=59229.33, stdev=12341.68, samples=3 00:12:50.506 iops : min=11374, max=17348, avg=14807.33, stdev=3085.42, samples=3 00:12:50.506 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:12:50.506 lat (msec) : 2=0.54%, 4=46.41%, 10=52.95%, 20=0.07% 00:12:50.506 cpu : usr=98.95%, sys=0.15%, ctx=4, majf=0, minf=606 00:12:50.506 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:50.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.506 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:50.506 issued rwts: total=30786,30808,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:50.506 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:50.506 00:12:50.506 Run status group 0 (all jobs): 00:12:50.506 READ: bw=60.1MiB/s (63.0MB/s), 60.1MiB/s-60.1MiB/s (63.0MB/s-63.0MB/s), io=120MiB (126MB), run=2001-2001msec 00:12:50.506 WRITE: bw=60.1MiB/s (63.1MB/s), 60.1MiB/s-60.1MiB/s (63.1MB/s-63.1MB/s), io=120MiB (126MB), run=2001-2001msec 00:12:50.506 ----------------------------------------------------- 00:12:50.506 Suppressions used: 00:12:50.506 count bytes template 00:12:50.506 1 32 /usr/src/fio/parse.c 00:12:50.506 1 8 libtcmalloc_minimal.so 00:12:50.506 ----------------------------------------------------- 00:12:50.506 00:12:50.506 19:34:40 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:50.506 19:34:40 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:50.506 19:34:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:50.506 19:34:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:50.506 19:34:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:50.506 19:34:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:50.764 19:34:41 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 
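The fio runs in this section all go through fio_nvme/fio_plugin: the plugin's ASAN dependency is resolved with ldd | grep libasan | awk '{print $3}', then both the sanitizer runtime and the SPDK plugin are preloaded in front of fio, and the PCIe address is passed through the plugin's filename syntax with dots instead of colons. Below is a condensed sketch of that invocation using the paths visible in this run; on another machine the plugin, job file, and fio locations would differ, and asan_lib is simply empty for non-ASAN builds.

```bash
#!/usr/bin/env bash
# Condensed sketch of the fio_plugin invocation pattern traced in this section.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
job=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
bdf=0000.00.13.0            # PCIe address written with dots, as the plugin expects

# Resolve the libasan runtime the plugin was linked against (empty if not ASAN-built).
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

# Preload the sanitizer (if any) ahead of the SPDK ioengine, then run the job file.
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
    /usr/src/fio/fio "$job" "--filename=trtype=PCIe traddr=$bdf" --bs=4096
```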
00:12:50.764 19:34:41 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:50.764 19:34:41 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:50.764 19:34:41 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:50.764 19:34:41 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:50.764 19:34:41 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:50.764 19:34:41 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:50.764 19:34:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:12:50.764 19:34:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:50.764 19:34:41 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:50.764 19:34:41 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:50.764 19:34:41 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:12:50.764 19:34:41 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:50.764 19:34:41 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:50.764 19:34:41 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:50.764 19:34:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:12:50.764 19:34:41 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:50.764 19:34:41 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:51.021 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:51.021 fio-3.35 00:12:51.021 Starting 1 thread 00:12:56.295 00:12:56.295 test: (groupid=0, jobs=1): err= 0: pid=71655: Mon Jul 15 19:34:46 2024 00:12:56.295 read: IOPS=17.9k, BW=69.8MiB/s (73.2MB/s)(140MiB/2001msec) 00:12:56.295 slat (nsec): min=4638, max=51241, avg=5899.81, stdev=1681.24 00:12:56.295 clat (usec): min=598, max=10106, avg=3565.86, stdev=641.51 00:12:56.295 lat (usec): min=607, max=10141, avg=3571.76, stdev=642.35 00:12:56.295 clat percentiles (usec): 00:12:56.295 | 1.00th=[ 2606], 5.00th=[ 2900], 10.00th=[ 3064], 20.00th=[ 3163], 00:12:56.295 | 30.00th=[ 3228], 40.00th=[ 3294], 50.00th=[ 3359], 60.00th=[ 3425], 00:12:56.295 | 70.00th=[ 3589], 80.00th=[ 4015], 90.00th=[ 4293], 95.00th=[ 4686], 00:12:56.295 | 99.00th=[ 5735], 99.50th=[ 5997], 99.90th=[ 8094], 99.95th=[ 8291], 00:12:56.295 | 99.99th=[ 9896] 00:12:56.295 bw ( KiB/s): min=59544, max=80000, per=100.00%, avg=71704.00, stdev=10761.50, samples=3 00:12:56.295 iops : min=14886, max=20000, avg=17926.00, stdev=2690.37, samples=3 00:12:56.295 write: IOPS=17.9k, BW=69.8MiB/s (73.2MB/s)(140MiB/2001msec); 0 zone resets 00:12:56.295 slat (nsec): min=4740, max=50666, avg=6033.30, stdev=1707.24 00:12:56.295 clat (usec): min=450, max=9951, avg=3571.35, stdev=644.89 00:12:56.295 lat (usec): min=459, 
max=9964, avg=3577.38, stdev=645.71 00:12:56.295 clat percentiles (usec): 00:12:56.295 | 1.00th=[ 2606], 5.00th=[ 2900], 10.00th=[ 3097], 20.00th=[ 3195], 00:12:56.295 | 30.00th=[ 3228], 40.00th=[ 3294], 50.00th=[ 3359], 60.00th=[ 3458], 00:12:56.295 | 70.00th=[ 3589], 80.00th=[ 4047], 90.00th=[ 4359], 95.00th=[ 4752], 00:12:56.295 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 8029], 99.95th=[ 8291], 00:12:56.295 | 99.99th=[ 9634] 00:12:56.295 bw ( KiB/s): min=59840, max=79888, per=100.00%, avg=71645.33, stdev=10488.09, samples=3 00:12:56.295 iops : min=14960, max=19972, avg=17911.33, stdev=2622.02, samples=3 00:12:56.295 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:12:56.295 lat (msec) : 2=0.12%, 4=78.97%, 10=20.89%, 20=0.01% 00:12:56.295 cpu : usr=99.15%, sys=0.15%, ctx=5, majf=0, minf=603 00:12:56.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:56.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:56.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:56.295 issued rwts: total=35767,35764,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:56.295 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:56.295 00:12:56.295 Run status group 0 (all jobs): 00:12:56.295 READ: bw=69.8MiB/s (73.2MB/s), 69.8MiB/s-69.8MiB/s (73.2MB/s-73.2MB/s), io=140MiB (147MB), run=2001-2001msec 00:12:56.295 WRITE: bw=69.8MiB/s (73.2MB/s), 69.8MiB/s-69.8MiB/s (73.2MB/s-73.2MB/s), io=140MiB (146MB), run=2001-2001msec 00:12:56.295 ----------------------------------------------------- 00:12:56.295 Suppressions used: 00:12:56.295 count bytes template 00:12:56.295 1 32 /usr/src/fio/parse.c 00:12:56.295 1 8 libtcmalloc_minimal.so 00:12:56.295 ----------------------------------------------------- 00:12:56.295 00:12:56.295 19:34:46 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:56.295 19:34:46 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:12:56.295 00:12:56.295 real 0m20.952s 00:12:56.295 user 0m15.235s 00:12:56.295 sys 0m7.719s 00:12:56.295 19:34:46 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:56.295 19:34:46 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:12:56.295 ************************************ 00:12:56.295 END TEST nvme_fio 00:12:56.295 ************************************ 00:12:56.295 19:34:46 nvme -- common/autotest_common.sh@1142 -- # return 0 00:12:56.295 00:12:56.295 real 1m37.469s 00:12:56.295 user 3m48.551s 00:12:56.295 sys 0m25.800s 00:12:56.295 19:34:46 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:56.295 19:34:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.295 ************************************ 00:12:56.295 END TEST nvme 00:12:56.295 ************************************ 00:12:56.295 19:34:46 -- common/autotest_common.sh@1142 -- # return 0 00:12:56.295 19:34:46 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:12:56.295 19:34:46 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:56.295 19:34:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:56.295 19:34:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:56.295 19:34:46 -- common/autotest_common.sh@10 -- # set +x 00:12:56.295 ************************************ 00:12:56.295 START TEST nvme_scc 00:12:56.295 ************************************ 00:12:56.295 19:34:46 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:56.295 * Looking for test 
storage... 00:12:56.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:56.295 19:34:46 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:56.295 19:34:46 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:56.295 19:34:46 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:56.295 19:34:46 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:56.295 19:34:46 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:56.295 19:34:46 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.295 19:34:46 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.295 19:34:46 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.295 19:34:46 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.295 19:34:46 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.295 19:34:46 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.295 19:34:46 nvme_scc -- paths/export.sh@5 -- # export PATH 00:12:56.295 19:34:46 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.295 19:34:46 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:12:56.295 19:34:46 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:56.295 19:34:46 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:12:56.295 19:34:46 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:56.295 19:34:46 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:12:56.295 19:34:46 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:56.295 19:34:46 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:56.295 19:34:46 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:56.295 19:34:46 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:12:56.295 19:34:46 
nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:56.295 19:34:46 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:12:56.295 19:34:46 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:12:56.295 19:34:46 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:12:56.295 19:34:46 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:56.295 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:56.295 Waiting for block devices as requested 00:12:56.295 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:56.553 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:56.553 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:56.553 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:01.833 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:01.833 19:34:52 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:01.833 19:34:52 nvme_scc -- scripts/common.sh@15 -- # local i 00:13:01.833 19:34:52 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:13:01.833 19:34:52 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:01.833 19:34:52 nvme_scc -- scripts/common.sh@24 -- # return 0 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 
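The scan_nvme_ctrls/nvme_get trace above boils down to running nvme id-ctrl against each controller and folding every "name : value" line into a bash associative array (nvme0[vid], nvme0[sn], and so on). A condensed, self-contained version of that loop is sketched below; it assumes nvme-cli is on PATH and uses a fixed array name instead of the eval-based indirection the real test/common/nvme/functions.sh helper uses to fill a caller-chosen array.

```bash
#!/usr/bin/env bash
# Condensed sketch of the nvme_get parsing loop: load `nvme id-ctrl` output into
# a bash associative array. Assumes nvme-cli is installed and /dev/nvme0 exists.
declare -A ctrl

while IFS=: read -r reg val; do
    [[ -n $val ]] || continue           # skip lines that are not "name : value"
    reg=${reg//[[:space:]]/}            # strip the padding around the field name
    val=${val# }                        # drop the single leading space of the value
    ctrl[$reg]=$val
done < <(nvme id-ctrl /dev/nvme0)

echo "vid=${ctrl[vid]} sn='${ctrl[sn]}' mdts=${ctrl[mdts]}"
```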
00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:01.833 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:01.834 19:34:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:01.835 19:34:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.835 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
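What the trace above is doing: nvme/functions.sh caches every field of `nvme id-ctrl /dev/nvme0` into the bash associative array nvme0, splitting each "reg : val" line from nvme-cli on ':' and storing it with an eval. Below is a minimal sketch of that pattern, reconstructed from the xtrace (functions.sh@16-23) rather than copied from the SPDK source; the function and array names are illustrative only.

#!/usr/bin/env bash
# Sketch: cache "field : value" output of nvme-cli into an associative array.
# Reconstructed from the trace above; not the exact SPDK helper.
declare -A ctrl_info

cache_id_ctrl() {
    local dev=$1 reg val
    # nvme id-ctrl prints lines such as "oacs      : 0x12a"
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # strip the padding around the field name
        val=${val# }                    # drop the space that follows ':'
        [[ -n $val ]] && ctrl_info[$reg]=$val
    done < <(nvme id-ctrl "$dev")
}

cache_id_ctrl /dev/nvme0
echo "oacs=${ctrl_info[oacs]} oncs=${ctrl_info[oncs]} subnqn=${ctrl_info[subnqn]}"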
00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.836 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
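Around functions.sh@47-63 in this trace, that same helper is driven by an outer walk: each controller under /sys/class/nvme is probed, each of its namespaces gets its own array via `nvme id-ns`, and the results are recorded in the global ctrls/nvmes/bdfs/ordered_ctrls maps (the nvme0 registration appears a little further down, just before the walk moves on to nvme1). A condensed sketch of that flow follows; the PCI-address lookup is an assumption of this sketch, not something shown in the log.

#!/usr/bin/env bash
# Sketch of the controller/namespace walk visible in the xtrace (simplified).
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    ctrl_dev=${ctrl##*/}                                  # e.g. nvme0
    bdf=$(basename "$(readlink -f "$ctrl/device")")       # e.g. 0000:00:11.0 (assumed lookup)
    # nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"       # fill the per-controller array
    for ns in "$ctrl/${ctrl##*/}n"*; do
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                                  # e.g. nvme0n1
        # nvme_get "$ns_dev" id-ns "/dev/$ns_dev"         # fill the per-namespace array
    done
    ctrls["$ctrl_dev"]=$ctrl_dev
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns
    bdfs["$ctrl_dev"]=$bdf
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev            # keep controllers in index order
done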
00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:01.837 19:34:52 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:01.837 19:34:52 nvme_scc -- scripts/common.sh@15 -- # local i 00:13:01.837 19:34:52 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:13:01.837 19:34:52 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:01.837 19:34:52 nvme_scc -- scripts/common.sh@24 -- # return 0 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:01.837 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:01.838 19:34:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.838 
19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.838 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.103 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:02.104 19:34:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:02.104 19:34:52 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.104 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:02.105 19:34:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:02.105 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.106 
19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
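The trace above repeats one pattern for every Identify field: the output of nvme-cli's id-ctrl/id-ns is read with IFS set to ':', each line is split into a register name and a value by read -r reg val, and every non-empty pair is eval'ed into a per-device associative array (nvme1, nvme1n1, and so on). A minimal standalone sketch of that same pattern, assuming bash 4.3+ and nvme-cli on PATH; the helper name parse_id_output and the array name ctrl are illustrative, not names taken from functions.sh:

parse_id_output() {                         # parse_id_output <assoc-array-name> <device>
    local -n _out=$1                        # nameref to the caller's associative array
    local reg val
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # keys like 'wctemp    ' -> 'wctemp'
        val=${val# }                        # drop the single space after the colon
        [[ -n $reg && -n $val ]] && _out[$reg]=$val
    done < <(nvme id-ctrl "$2")             # same nvme-cli output the trace parses
}

declare -A ctrl=()
parse_id_output ctrl /dev/nvme1
echo "${ctrl[wctemp]}"                      # -> 343, matching the value captured in the trace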
00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.106 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:02.107 19:34:52 nvme_scc -- scripts/common.sh@15 -- # local i 00:13:02.107 19:34:52 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:13:02.107 19:34:52 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:02.107 19:34:52 nvme_scc -- scripts/common.sh@24 -- # return 0 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:02.107 19:34:52 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:02.107 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
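Among the raw values captured for nvme2 just above, ver=0x10400 and mdts=7 are packed encodings rather than plain numbers. A short sketch of how they decode, assuming the NVMe version-register layout (major version in bits 31:16, minor in 15:8, tertiary in 7:0) and a 4 KiB minimum page size for the MDTS multiplier; the 4 KiB figure is an assumption for this QEMU controller, not something stated in the log:

ver=0x10400; mdts=7
printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))   # -> NVMe 1.4.0
printf 'max transfer: %d KiB\n' $(( (1 << mdts) * 4 ))                                  # -> 512 KiB, if min page size is 4 KiB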
00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
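The wctemp=343 and cctemp=373 values recorded above for nvme2 (and identically for nvme1 earlier) are the warning and critical composite-temperature thresholds, which Identify Controller reports in Kelvin. Converting them with the usual integer 273 K offset:

wctemp=343; cctemp=373
echo "warning threshold:  $(( wctemp - 273 )) C"     # -> 70 C
echo "critical threshold: $(( cctemp - 273 )) C"     # -> 100 C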
00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.108 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:02.109 19:34:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 
19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2[ofcs]="0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.109 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.110 19:34:52 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:13:02.110 19:34:52 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:13:02.110 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nsze]="0x100000"' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:13:02.111 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:13:02.112 19:34:52 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:02.112 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@20 
-- # local -gA 'nvme2n3=()' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.113 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
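The trace entries above show nvme_get filling the nvme2n3 associative array one register at a time from `nvme id-ns` output, splitting each "key : value" line on the first colon. A minimal stand-alone sketch of that parse loop (an illustration only, not the exact nvme/functions.sh implementation; the binary path and device node are the ones this log uses):

declare -A ns_regs
while IFS=: read -r reg val; do
    [[ -n ${reg// } ]] || continue              # skip blank/decorative lines
    reg=${reg//[[:space:]]/}                    # "lbaf  4" -> "lbaf4", like the keys above
    val=${val#"${val%%[![:space:]]*}"}          # trim only leading whitespace from the value
    ns_regs[$reg]=$val
done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3)
echo "${ns_regs[lbaf4]}"                        # e.g. "ms:0 lbads:12 rp:0 (in use)"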
00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:02.114 19:34:52 nvme_scc -- scripts/common.sh@15 -- # local i 00:13:02.114 19:34:52 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:13:02.114 19:34:52 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:02.114 19:34:52 nvme_scc -- scripts/common.sh@24 -- # return 0 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.114 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:02.115 19:34:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:02.115 19:34:52 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.115 19:34:52 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:02.115 19:34:52 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:02.115 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:02.116 19:34:52 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.116 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:02.117 
19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
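Once the nvme3 registers are captured, the trace below loops over every discovered controller and keeps only those whose ONCS word has bit 8 set, i.e. controllers that advertise the NVMe Simple Copy command (which is why 0x15d passes the test). A simplified, self-contained variant of that check follows; the real ctrl_has_scc in nvme/functions.sh reads the value from the arrays built above instead of re-running nvme:

ctrl_has_scc() {
    # ONCS bit 8 = Simple Copy support, so 0x15d & (1 << 8) is non-zero.
    local ctrl=$1 oncs
    oncs=$(/usr/local/src/nvme-cli/nvme id-ctrl "/dev/$ctrl" \
           | awk -F: '/^oncs/ {gsub(/[[:space:]]/, "", $2); print $2}')
    (( oncs & 1 << 8 ))
}
for ctrl in nvme0 nvme1 nvme2 nvme3; do
    ctrl_has_scc "$ctrl" && echo "$ctrl"        # the trace below echoes nvme1, nvme0, nvme3, nvme2
done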
00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:02.117 19:34:52 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:13:02.117 19:34:52 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:13:02.117 19:34:52 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:13:02.118 19:34:52 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:13:02.118 19:34:52 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:13:02.118 19:34:52 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:02.686 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:03.249 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:03.249 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:03.249 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:03.508 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:03.508 19:34:54 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:03.508 19:34:54 nvme_scc -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:03.508 19:34:54 nvme_scc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:03.508 19:34:54 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:03.508 ************************************ 00:13:03.508 START TEST nvme_simple_copy 00:13:03.508 ************************************ 00:13:03.508 19:34:54 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:03.766 Initializing NVMe Controllers 00:13:03.766 Attaching to 0000:00:10.0 00:13:03.766 Controller supports SCC. Attached to 0000:00:10.0 00:13:03.766 Namespace ID: 1 size: 6GB 00:13:03.766 Initialization complete. 00:13:03.766 00:13:03.766 Controller QEMU NVMe Ctrl (12340 ) 00:13:03.766 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:13:03.766 Namespace Block Size:4096 00:13:03.766 Writing LBAs 0 to 63 with Random Data 00:13:03.766 Copied LBAs from 0 - 63 to the Destination LBA 256 00:13:03.766 LBAs matching Written Data: 64 00:13:03.766 00:13:03.766 real 0m0.349s 00:13:03.766 user 0m0.135s 00:13:03.766 sys 0m0.111s 00:13:03.766 19:34:54 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:03.766 19:34:54 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:13:03.766 ************************************ 00:13:03.766 END TEST nvme_simple_copy 00:13:03.766 ************************************ 00:13:04.024 19:34:54 nvme_scc -- common/autotest_common.sh@1142 -- # return 0 00:13:04.024 00:13:04.024 real 0m8.082s 00:13:04.024 user 0m1.157s 00:13:04.024 sys 0m1.820s 00:13:04.024 19:34:54 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:04.024 19:34:54 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:04.024 ************************************ 00:13:04.024 END TEST nvme_scc 00:13:04.024 ************************************ 00:13:04.024 19:34:54 -- common/autotest_common.sh@1142 -- # return 0 00:13:04.024 19:34:54 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:13:04.024 19:34:54 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:13:04.024 19:34:54 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:13:04.024 19:34:54 -- spdk/autotest.sh@232 -- # [[ 1 -eq 1 ]] 00:13:04.024 19:34:54 -- spdk/autotest.sh@233 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:13:04.024 19:34:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:04.024 19:34:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:04.024 19:34:54 -- common/autotest_common.sh@10 -- # set +x 00:13:04.024 ************************************ 00:13:04.024 START TEST nvme_fdp 00:13:04.024 ************************************ 00:13:04.024 19:34:54 nvme_fdp -- common/autotest_common.sh@1123 -- # test/nvme/nvme_fdp.sh 00:13:04.024 * Looking for test storage... 
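The simple-copy summary above reports that 64 LBAs written with random data at LBA 0 were copied to destination LBA 256 and that all 64 blocks matched. As a hedged illustration only (the SPDK simple_copy example issues the copy through its own NVMe driver, not through the block layer), the same verification step can be expressed with dd and cmp against the reported 4096-byte block size; the namespace path is an assumption for the 0000:00:10.0 controller:

dev=/dev/nvme1n1      # assumed device node, for illustration
bs=4096               # "Namespace Block Size:4096" from the summary above
cmp <(dd if="$dev" bs=$bs skip=0   count=64 status=none) \
    <(dd if="$dev" bs=$bs skip=256 count=64 status=none) \
    && echo "LBAs matching Written Data: 64"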
00:13:04.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:04.024 19:34:54 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:04.024 19:34:54 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:04.024 19:34:54 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:04.024 19:34:54 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:04.024 19:34:54 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:04.025 19:34:54 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.025 19:34:54 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.025 19:34:54 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.025 19:34:54 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.025 19:34:54 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.025 19:34:54 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.025 19:34:54 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:13:04.025 19:34:54 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.025 19:34:54 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:13:04.025 19:34:54 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:04.025 19:34:54 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:13:04.025 19:34:54 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:04.025 19:34:54 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:13:04.025 19:34:54 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:04.025 19:34:54 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:04.025 19:34:54 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:04.025 19:34:54 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:13:04.025 19:34:54 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:04.025 19:34:54 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:04.282 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:04.569 Waiting for block devices as requested 00:13:04.569 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:04.569 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:04.569 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:04.829 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:10.105 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:10.105 19:35:00 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:13:10.105 19:35:00 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:10.106 19:35:00 nvme_fdp -- scripts/common.sh@15 -- # local i 00:13:10.106 19:35:00 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:13:10.106 19:35:00 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:10.106 19:35:00 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 
19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:10.106 19:35:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.106 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:10.107 19:35:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:10.107 19:35:00 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:10.107 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:10.108 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:10.109 
19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:10.109 
19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:10.109 19:35:00 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:10.109 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
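The xtrace above repeats one pattern for every Identify field: each "reg : val" line emitted by nvme-cli is split on ':' and the pair is stored into a bash associative array (nvme0, nvme0n1, ...) via eval. A minimal, self-contained sketch of that parsing pattern follows; parse_id_output and id_fields are hypothetical names used only for illustration, not the actual nvme/functions.sh helpers, whose exact behavior is defined by that script.

#!/usr/bin/env bash
# Sketch only: approximates the IFS=':' read/assign loop visible in the trace above.
declare -A id_fields

parse_id_output() {
    local dev=$1 reg val
    # 'nvme id-ctrl /dev/nvme0' prints lines such as "vid : 0x1b36";
    # split each line on the first ':' and keep the trimmed register name.
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # strip whitespace around the register name
        [[ -n $reg && -n $val ]] || continue
        id_fields[$reg]=${val# }        # e.g. id_fields[vid]=0x1b36
    done < <(nvme id-ctrl "$dev")
}

# Usage (assumes nvme-cli is installed and /dev/nvme0 exists):
#   parse_id_output /dev/nvme0
#   echo "vendor id: ${id_fields[vid]}"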
00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:10.110 19:35:00 nvme_fdp -- scripts/common.sh@15 -- # local i 00:13:10.110 19:35:00 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:13:10.110 19:35:00 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:10.110 19:35:00 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:10.110 19:35:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.110 
19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.110 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:10.111 19:35:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.111 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:10.112 19:35:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:10.112 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.113 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 
19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:10.114 19:35:00 nvme_fdp -- scripts/common.sh@15 -- # local i 00:13:10.114 19:35:00 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:13:10.114 19:35:00 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:10.114 19:35:00 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.114 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:10.115 19:35:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:10.115 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:10.116 19:35:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.116 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:10.117 19:35:00 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 
19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.117 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:13:10.118 19:35:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:10.118 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:13:10.119 19:35:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
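The xtrace above keeps repeating one pattern: nvme_get feeds the human-readable output of /usr/local/src/nvme-cli/nvme id-ctrl / id-ns through "IFS=: read -r reg val" and stores each field in a bash associative array named after the device (nvme2, nvme2n1, nvme2n2, ...). A minimal sketch of that loop, assuming nvme-cli is present at the path shown in the trace; the helper name and the exact whitespace cleanup are illustrative, not the verbatim functions.sh code.

    # Sketch only: approximates the nvme_get loop traced above.
    get_nvme_regs() {                       # get_nvme_regs <array-name> <subcmd> <dev>
        local -n _regs=$1                   # nameref into a caller-declared assoc array
        local reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}        # "wctemp   " -> wctemp, "lbaf  4" -> lbaf4
            val=${val# }                    # drop the space right after the first colon
            [[ -n $reg && -n $val ]] || continue   # skip section headers and blank lines
            _regs[$reg]=$val                # e.g. nvme2n1[nsze]=0x100000
        done < <(/usr/local/src/nvme-cli/nvme "$2" "$3")
    }

    declare -A nvme2n1=()
    get_nvme_regs nvme2n1 id-ns /dev/nvme2n1
    echo "${nvme2n1[lbaf4]}"                # "ms:0 lbads:12 rp:0 (in use)" on the QEMU namespace traced here

Because val keeps everything after the first colon, multi-part fields such as the lbafN and power-state lines come through whole, which matches the quoted values seen in the eval lines above.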
00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.119 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
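Right before each of these id-ns dumps, functions.sh@53-58 shows how the namespaces are discovered in the first place: a nameref (_ctrl_ns) points at a per-controller map such as nvme2_ns, every /sys/class/nvme/nvme2/nvme2n* entry is probed with [[ -e ]], handed to nvme_get, and then recorded under its namespace number. A rough bash equivalent follows, reusing the get_nvme_regs sketch above; the function name and the declare calls are again assumptions for illustration.

    # Sketch of the namespace walk visible at functions.sh@53-58 in the trace.
    scan_ctrl_namespaces() {                 # scan_ctrl_namespaces /sys/class/nvme/nvme2
        local ctrl=$1 ns ns_dev
        declare -gA "${ctrl##*/}_ns"         # per-controller map, cf. nvme2_ns
        local -n _ctrl_ns=${ctrl##*/}_ns
        for ns in "$ctrl/${ctrl##*/}n"*; do  # nvme2n1, nvme2n2, nvme2n3, ...
            [[ -e $ns ]] || continue         # the glob may match nothing
            ns_dev=${ns##*/}
            declare -gA "$ns_dev"            # global assoc array nvme2n1, nvme2n2, ...
            get_nvme_regs "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev      # index by namespace number, e.g. nvme2_ns[1]=nvme2n1
        done
    }

Indexing the map by namespace number is presumably what lets the later FDP checks walk from a controller straight to the per-namespace arrays being filled in this trace.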
00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.120 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.121 19:35:00 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:13:10.121 19:35:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.121 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:10.122 19:35:00 nvme_fdp -- scripts/common.sh@15 -- # local i 00:13:10.122 19:35:00 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:13:10.122 19:35:00 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:10.122 19:35:00 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:10.122 19:35:00 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:10.122 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:10.123 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:10.124 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.382 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.382 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.382 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:10.382 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:10.382 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.383 
19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:10.383 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:10.384 19:35:00 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:13:10.384 19:35:00 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:13:10.384 19:35:00 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:13:10.384 19:35:00 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:13:10.384 19:35:00 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:13:10.384 19:35:00 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:10.953 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:11.519 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:11.519 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:11.519 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:11.778 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:11.778 19:35:02 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:11.778 19:35:02 nvme_fdp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:11.778 19:35:02 nvme_fdp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:11.778 19:35:02 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:13:11.778 ************************************ 00:13:11.778 START TEST nvme_flexible_data_placement 00:13:11.778 ************************************ 00:13:11.778 19:35:02 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:12.037 Initializing NVMe Controllers 00:13:12.037 Attaching to 0000:00:13.0 00:13:12.037 Controller supports FDP Attached to 0000:00:13.0 00:13:12.037 Namespace ID: 1 Endurance Group ID: 1 
00:13:12.037 Initialization complete. 00:13:12.037 00:13:12.037 ================================== 00:13:12.037 == FDP tests for Namespace: #01 == 00:13:12.037 ================================== 00:13:12.037 00:13:12.037 Get Feature: FDP: 00:13:12.037 ================= 00:13:12.037 Enabled: Yes 00:13:12.037 FDP configuration Index: 0 00:13:12.037 00:13:12.037 FDP configurations log page 00:13:12.037 =========================== 00:13:12.037 Number of FDP configurations: 1 00:13:12.037 Version: 0 00:13:12.037 Size: 112 00:13:12.037 FDP Configuration Descriptor: 0 00:13:12.037 Descriptor Size: 96 00:13:12.037 Reclaim Group Identifier format: 2 00:13:12.037 FDP Volatile Write Cache: Not Present 00:13:12.037 FDP Configuration: Valid 00:13:12.037 Vendor Specific Size: 0 00:13:12.037 Number of Reclaim Groups: 2 00:13:12.037 Number of Recalim Unit Handles: 8 00:13:12.037 Max Placement Identifiers: 128 00:13:12.037 Number of Namespaces Suppprted: 256 00:13:12.037 Reclaim unit Nominal Size: 6000000 bytes 00:13:12.037 Estimated Reclaim Unit Time Limit: Not Reported 00:13:12.037 RUH Desc #000: RUH Type: Initially Isolated 00:13:12.037 RUH Desc #001: RUH Type: Initially Isolated 00:13:12.037 RUH Desc #002: RUH Type: Initially Isolated 00:13:12.037 RUH Desc #003: RUH Type: Initially Isolated 00:13:12.037 RUH Desc #004: RUH Type: Initially Isolated 00:13:12.037 RUH Desc #005: RUH Type: Initially Isolated 00:13:12.037 RUH Desc #006: RUH Type: Initially Isolated 00:13:12.037 RUH Desc #007: RUH Type: Initially Isolated 00:13:12.037 00:13:12.037 FDP reclaim unit handle usage log page 00:13:12.037 ====================================== 00:13:12.037 Number of Reclaim Unit Handles: 8 00:13:12.037 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:12.037 RUH Usage Desc #001: RUH Attributes: Unused 00:13:12.037 RUH Usage Desc #002: RUH Attributes: Unused 00:13:12.037 RUH Usage Desc #003: RUH Attributes: Unused 00:13:12.037 RUH Usage Desc #004: RUH Attributes: Unused 00:13:12.037 RUH Usage Desc #005: RUH Attributes: Unused 00:13:12.037 RUH Usage Desc #006: RUH Attributes: Unused 00:13:12.037 RUH Usage Desc #007: RUH Attributes: Unused 00:13:12.037 00:13:12.037 FDP statistics log page 00:13:12.037 ======================= 00:13:12.037 Host bytes with metadata written: 794488832 00:13:12.037 Media bytes with metadata written: 794595328 00:13:12.037 Media bytes erased: 0 00:13:12.037 00:13:12.037 FDP Reclaim unit handle status 00:13:12.037 ============================== 00:13:12.037 Number of RUHS descriptors: 2 00:13:12.037 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000a51 00:13:12.037 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:13:12.037 00:13:12.037 FDP write on placement id: 0 success 00:13:12.037 00:13:12.037 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:13:12.037 00:13:12.037 IO mgmt send: RUH update for Placement ID: #0 Success 00:13:12.037 00:13:12.037 Get Feature: FDP Events for Placement handle: #0 00:13:12.037 ======================== 00:13:12.037 Number of FDP Events: 6 00:13:12.037 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:13:12.037 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:13:12.037 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:13:12.037 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:13:12.037 FDP Event: #4 Type: Media Reallocated Enabled: No 00:13:12.037 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 
00:13:12.037 00:13:12.037 FDP events log page 00:13:12.037 =================== 00:13:12.037 Number of FDP events: 1 00:13:12.037 FDP Event #0: 00:13:12.037 Event Type: RU Not Written to Capacity 00:13:12.037 Placement Identifier: Valid 00:13:12.037 NSID: Valid 00:13:12.037 Location: Valid 00:13:12.037 Placement Identifier: 0 00:13:12.037 Event Timestamp: c 00:13:12.037 Namespace Identifier: 1 00:13:12.037 Reclaim Group Identifier: 0 00:13:12.037 Reclaim Unit Handle Identifier: 0 00:13:12.037 00:13:12.037 FDP test passed 00:13:12.037 00:13:12.037 real 0m0.364s 00:13:12.037 user 0m0.128s 00:13:12.037 sys 0m0.133s 00:13:12.037 19:35:02 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:12.037 ************************************ 00:13:12.037 19:35:02 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:13:12.037 END TEST nvme_flexible_data_placement 00:13:12.037 ************************************ 00:13:12.296 19:35:02 nvme_fdp -- common/autotest_common.sh@1142 -- # return 0 00:13:12.296 00:13:12.296 real 0m8.250s 00:13:12.296 user 0m1.311s 00:13:12.296 sys 0m1.946s 00:13:12.296 19:35:02 nvme_fdp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:12.296 19:35:02 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:13:12.296 ************************************ 00:13:12.296 END TEST nvme_fdp 00:13:12.296 ************************************ 00:13:12.296 19:35:02 -- common/autotest_common.sh@1142 -- # return 0 00:13:12.296 19:35:02 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:13:12.296 19:35:02 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:12.296 19:35:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:12.296 19:35:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:12.296 19:35:02 -- common/autotest_common.sh@10 -- # set +x 00:13:12.296 ************************************ 00:13:12.296 START TEST nvme_rpc 00:13:12.296 ************************************ 00:13:12.296 19:35:02 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:12.296 * Looking for test storage... 
00:13:12.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:12.296 19:35:03 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:12.296 19:35:03 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:13:12.296 19:35:03 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:13:12.296 19:35:03 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:13:12.296 19:35:03 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:13:12.296 19:35:03 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:13:12.296 19:35:03 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:13:12.296 19:35:03 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:13:12.296 19:35:03 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:12.296 19:35:03 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:12.296 19:35:03 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:13:12.555 19:35:03 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:13:12.555 19:35:03 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:12.555 19:35:03 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:13:12.555 19:35:03 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:13:12.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.555 19:35:03 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=72988 00:13:12.555 19:35:03 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:12.555 19:35:03 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:13:12.555 19:35:03 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 72988 00:13:12.555 19:35:03 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 72988 ']' 00:13:12.555 19:35:03 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.555 19:35:03 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:12.555 19:35:03 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.555 19:35:03 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:12.555 19:35:03 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.555 [2024-07-15 19:35:03.200705] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:13:12.555 [2024-07-15 19:35:03.200895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72988 ] 00:13:12.814 [2024-07-15 19:35:03.373941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:13.072 [2024-07-15 19:35:03.713365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.072 [2024-07-15 19:35:03.713366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.008 19:35:04 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:14.008 19:35:04 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:13:14.008 19:35:04 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:13:14.267 Nvme0n1 00:13:14.267 19:35:05 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:13:14.267 19:35:05 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:13:14.526 request: 00:13:14.526 { 00:13:14.526 "bdev_name": "Nvme0n1", 00:13:14.526 "filename": "non_existing_file", 00:13:14.526 "method": "bdev_nvme_apply_firmware", 00:13:14.526 "req_id": 1 00:13:14.526 } 00:13:14.526 Got JSON-RPC error response 00:13:14.526 response: 00:13:14.526 { 00:13:14.526 "code": -32603, 00:13:14.526 "message": "open file failed." 00:13:14.526 } 00:13:14.526 19:35:05 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:13:14.526 19:35:05 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:13:14.526 19:35:05 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:13:14.784 19:35:05 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:14.784 19:35:05 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 72988 00:13:14.784 19:35:05 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 72988 ']' 00:13:14.784 19:35:05 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 72988 00:13:14.784 19:35:05 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:13:14.784 19:35:05 nvme_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:14.784 19:35:05 nvme_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72988 00:13:14.784 19:35:05 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:14.784 killing process with pid 72988 00:13:14.784 19:35:05 nvme_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:14.784 19:35:05 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72988' 00:13:14.784 19:35:05 nvme_rpc -- common/autotest_common.sh@967 -- # kill 72988 00:13:14.784 19:35:05 nvme_rpc -- common/autotest_common.sh@972 -- # wait 72988 00:13:17.348 00:13:17.348 real 0m5.156s 00:13:17.348 user 0m9.456s 00:13:17.348 sys 0m0.726s 00:13:17.348 19:35:08 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:17.348 ************************************ 00:13:17.348 END TEST nvme_rpc 00:13:17.348 ************************************ 00:13:17.348 19:35:08 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.348 19:35:08 -- common/autotest_common.sh@1142 -- # return 0 00:13:17.348 19:35:08 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 
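The nvme_rpc test that wraps up above reduces to four RPC calls, all visible in the trace: attach an NVMe bdev over PCIe, attempt a firmware download with a file that does not exist (the -32603 "open file failed." response is the expected outcome), detach the controller, and kill the target. A minimal by-hand version of the same sequence, assuming the repo paths used throughout this log:

    # start the target on two cores, as the test does
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # creates bdev Nvme0n1
    $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1 \
        || echo "firmware download failed as expected"
    $rpc bdev_nvme_detach_controller Nvme0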
00:13:17.348 19:35:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:17.348 19:35:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:17.348 19:35:08 -- common/autotest_common.sh@10 -- # set +x 00:13:17.348 ************************************ 00:13:17.348 START TEST nvme_rpc_timeouts 00:13:17.348 ************************************ 00:13:17.348 19:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:17.606 * Looking for test storage... 00:13:17.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:17.606 19:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:17.606 19:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_73071 00:13:17.606 19:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_73071 00:13:17.606 19:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=73095 00:13:17.606 19:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:13:17.606 19:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 73095 00:13:17.606 19:35:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:17.606 19:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 73095 ']' 00:13:17.606 19:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.606 19:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:17.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.606 19:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.606 19:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:17.606 19:35:08 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:17.606 [2024-07-15 19:35:08.365414] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:13:17.606 [2024-07-15 19:35:08.365601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73095 ] 00:13:17.864 [2024-07-15 19:35:08.555900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:18.123 [2024-07-15 19:35:08.878126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.123 [2024-07-15 19:35:08.878153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.494 Checking default timeout settings: 00:13:19.495 19:35:09 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:19.495 19:35:09 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:13:19.495 19:35:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:13:19.495 19:35:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:19.495 Making settings changes with rpc: 00:13:19.495 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:13:19.495 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:13:19.753 Check default vs. modified settings: 00:13:19.753 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:13:19.753 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_73071 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_73071 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:13:20.318 Setting action_on_timeout is changed as expected. 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_73071 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_73071 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:13:20.318 Setting timeout_us is changed as expected. 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_73071 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_73071 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:13:20.318 Setting timeout_admin_us is changed as expected. 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
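The loop above is the heart of nvme_rpc_timeouts: save_config is captured once with the defaults and once after bdev_nvme_set_options, and each of action_on_timeout, timeout_us and timeout_admin_us is grepped out of both snapshots and compared (none/0/0 before, abort/12000000/24000000 after). A condensed sketch of the same comparison, assuming the rpc.py path from this log and bash process substitution:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default
    $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified
    for s in action_on_timeout timeout_us timeout_admin_us; do
        if diff -q <(grep "\"$s\"" /tmp/settings_default) <(grep "\"$s\"" /tmp/settings_modified) >/dev/null; then
            echo "Setting $s did NOT change"          # would be a test failure
        else
            echo "Setting $s is changed as expected."
        fi
    done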
00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_73071 /tmp/settings_modified_73071 00:13:20.318 19:35:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 73095 00:13:20.318 19:35:10 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 73095 ']' 00:13:20.318 19:35:10 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 73095 00:13:20.318 19:35:10 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:13:20.318 19:35:10 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:20.318 19:35:10 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73095 00:13:20.318 19:35:10 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:20.318 19:35:10 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:20.318 19:35:10 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73095' 00:13:20.318 killing process with pid 73095 00:13:20.318 19:35:10 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 73095 00:13:20.318 19:35:10 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 73095 00:13:23.616 RPC TIMEOUT SETTING TEST PASSED. 00:13:23.616 19:35:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:13:23.616 00:13:23.616 real 0m5.738s 00:13:23.616 user 0m10.528s 00:13:23.616 sys 0m0.767s 00:13:23.616 19:35:13 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:23.616 19:35:13 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:23.616 ************************************ 00:13:23.616 END TEST nvme_rpc_timeouts 00:13:23.616 ************************************ 00:13:23.616 19:35:13 -- common/autotest_common.sh@1142 -- # return 0 00:13:23.616 19:35:13 -- spdk/autotest.sh@243 -- # uname -s 00:13:23.616 19:35:13 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:13:23.616 19:35:13 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:23.616 19:35:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:23.616 19:35:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:23.616 19:35:13 -- common/autotest_common.sh@10 -- # set +x 00:13:23.616 ************************************ 00:13:23.616 START TEST sw_hotplug 00:13:23.616 ************************************ 00:13:23.616 19:35:13 sw_hotplug -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:23.616 * Looking for test storage... 
00:13:23.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:23.616 19:35:14 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:23.616 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:23.875 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:23.875 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:23.875 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:23.875 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:23.875 19:35:14 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:13:23.875 19:35:14 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:13:23.875 19:35:14 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:13:23.875 19:35:14 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@230 -- # local class 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@15 -- # local i 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:13:23.875 19:35:14 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@15 -- # local i 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@15 -- # local i 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:13:23.875 19:35:14 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:13:24.134 19:35:14 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:13:24.134 19:35:14 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:13:24.134 19:35:14 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:13:24.134 19:35:14 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:13:24.134 19:35:14 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:13:24.134 19:35:14 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:13:24.134 19:35:14 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:13:24.134 19:35:14 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:13:24.134 19:35:14 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:13:24.134 19:35:14 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:13:24.134 19:35:14 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:13:24.134 19:35:14 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:13:24.134 19:35:14 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:13:24.134 19:35:14 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:24.134 19:35:14 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:13:24.134 19:35:14 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:13:24.134 19:35:14 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:24.393 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:24.651 Waiting for block devices as requested 00:13:24.651 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:24.910 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:24.910 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:25.170 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:30.441 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:30.441 19:35:20 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:13:30.441 19:35:20 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:30.699 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:13:30.699 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:30.699 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:13:30.956 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:13:31.214 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:31.214 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:31.473 19:35:22 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:13:31.473 19:35:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:31.473 19:35:22 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:13:31.473 19:35:22 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:13:31.473 19:35:22 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=73976 00:13:31.473 19:35:22 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:13:31.473 19:35:22 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:31.473 19:35:22 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:13:31.473 19:35:22 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:13:31.473 19:35:22 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:13:31.473 19:35:22 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:13:31.473 19:35:22 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:13:31.473 19:35:22 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:13:31.473 19:35:22 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 false 00:13:31.473 19:35:22 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:31.473 19:35:22 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:31.473 19:35:22 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:13:31.473 19:35:22 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:31.473 19:35:22 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:31.731 Initializing NVMe Controllers 00:13:31.731 Attaching to 0000:00:10.0 00:13:31.731 Attaching to 0000:00:11.0 00:13:31.731 Attached to 0000:00:10.0 00:13:31.732 Attached to 0000:00:11.0 00:13:31.732 Initialization complete. Starting I/O... 
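At this point the run_hotplug phase is set up: SPDK's hotplug example (launched above as build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning, pid 73976) has attached to both allowed controllers and started I/O, and the script now repeatedly yanks and re-adds them underneath it. The bare `echo 1`, `echo uio_pci_generic` and `echo 0000:00:10.0` commands in the cycles that follow are writes into PCI sysfs files; xtrace does not show the redirection targets, so the paths below are the standard Linux ones and are an assumption about what the helpers do, not a literal copy of them:

    bdf=0000:00:10.0
    echo 1 > /sys/bus/pci/devices/$bdf/remove               # surprise-remove the controller
    sleep 6                                                 # give the app time to notice and detach
    echo 1 > /sys/bus/pci/rescan                            # re-enumerate the device
    echo uio_pci_generic > /sys/bus/pci/devices/$bdf/driver_override
    echo $bdf > /sys/bus/pci/drivers_probe                  # rebind it to the userspace driver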
00:13:31.732 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:13:31.732 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:13:31.732 00:13:32.668 QEMU NVMe Ctrl (12340 ): 890 I/Os completed (+890) 00:13:32.668 QEMU NVMe Ctrl (12341 ): 913 I/Os completed (+913) 00:13:32.668 00:13:34.040 QEMU NVMe Ctrl (12340 ): 1823 I/Os completed (+933) 00:13:34.040 QEMU NVMe Ctrl (12341 ): 1898 I/Os completed (+985) 00:13:34.040 00:13:35.002 QEMU NVMe Ctrl (12340 ): 3136 I/Os completed (+1313) 00:13:35.002 QEMU NVMe Ctrl (12341 ): 3237 I/Os completed (+1339) 00:13:35.002 00:13:35.953 QEMU NVMe Ctrl (12340 ): 4469 I/Os completed (+1333) 00:13:35.953 QEMU NVMe Ctrl (12341 ): 4660 I/Os completed (+1423) 00:13:35.953 00:13:36.885 QEMU NVMe Ctrl (12340 ): 5797 I/Os completed (+1328) 00:13:36.885 QEMU NVMe Ctrl (12341 ): 6046 I/Os completed (+1386) 00:13:36.885 00:13:37.462 19:35:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:37.462 19:35:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:37.462 19:35:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:37.462 [2024-07-15 19:35:28.188808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:37.462 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:37.462 [2024-07-15 19:35:28.191650] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:37.462 [2024-07-15 19:35:28.191736] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:37.462 [2024-07-15 19:35:28.191773] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:37.462 [2024-07-15 19:35:28.191841] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:37.462 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:37.462 [2024-07-15 19:35:28.196393] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:37.462 [2024-07-15 19:35:28.196477] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:37.462 [2024-07-15 19:35:28.196505] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:37.462 [2024-07-15 19:35:28.196537] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:37.462 19:35:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:37.462 19:35:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:37.462 [2024-07-15 19:35:28.231176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:37.462 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:37.462 [2024-07-15 19:35:28.233836] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:37.462 [2024-07-15 19:35:28.233909] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:37.463 [2024-07-15 19:35:28.233953] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:37.463 [2024-07-15 19:35:28.233989] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:37.463 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:37.463 [2024-07-15 19:35:28.237735] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:37.463 [2024-07-15 19:35:28.237807] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:37.463 [2024-07-15 19:35:28.237840] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:37.463 [2024-07-15 19:35:28.237865] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:37.463 19:35:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:37.463 19:35:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:37.720 19:35:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:37.720 19:35:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:37.720 19:35:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:37.720 00:13:37.720 19:35:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:37.720 19:35:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:37.720 19:35:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:37.720 19:35:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:37.720 19:35:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:37.720 Attaching to 0000:00:10.0 00:13:37.720 Attached to 0000:00:10.0 00:13:37.976 19:35:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:37.976 19:35:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:37.976 19:35:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:37.976 Attaching to 0000:00:11.0 00:13:37.976 Attached to 0000:00:11.0 00:13:38.907 QEMU NVMe Ctrl (12340 ): 1206 I/Os completed (+1206) 00:13:38.907 QEMU NVMe Ctrl (12341 ): 1087 I/Os completed (+1087) 00:13:38.907 00:13:39.839 QEMU NVMe Ctrl (12340 ): 2765 I/Os completed (+1559) 00:13:39.839 QEMU NVMe Ctrl (12341 ): 2691 I/Os completed (+1604) 00:13:39.839 00:13:40.793 QEMU NVMe Ctrl (12340 ): 4185 I/Os completed (+1420) 00:13:40.793 QEMU NVMe Ctrl (12341 ): 4131 I/Os completed (+1440) 00:13:40.793 00:13:41.728 QEMU NVMe Ctrl (12340 ): 5865 I/Os completed (+1680) 00:13:41.728 QEMU NVMe Ctrl (12341 ): 5826 I/Os completed (+1695) 00:13:41.728 00:13:42.661 QEMU NVMe Ctrl (12340 ): 7597 I/Os completed (+1732) 00:13:42.661 QEMU NVMe Ctrl (12341 ): 7560 I/Os completed (+1734) 00:13:42.661 00:13:44.041 QEMU NVMe Ctrl (12340 ): 9351 I/Os completed (+1754) 00:13:44.041 QEMU NVMe Ctrl (12341 ): 9359 I/Os completed (+1799) 00:13:44.041 00:13:44.976 QEMU NVMe Ctrl (12340 ): 11007 I/Os completed (+1656) 00:13:44.976 QEMU NVMe Ctrl (12341 ): 11023 I/Os completed (+1664) 00:13:44.976 00:13:45.912 QEMU NVMe Ctrl (12340 ): 12946 I/Os completed (+1939) 00:13:45.912 QEMU NVMe Ctrl (12341 ): 12959 I/Os completed (+1936) 00:13:45.912 
00:13:46.847 QEMU NVMe Ctrl (12340 ): 14704 I/Os completed (+1758) 00:13:46.847 QEMU NVMe Ctrl (12341 ): 14813 I/Os completed (+1854) 00:13:46.847 00:13:47.778 QEMU NVMe Ctrl (12340 ): 16228 I/Os completed (+1524) 00:13:47.778 QEMU NVMe Ctrl (12341 ): 16821 I/Os completed (+2008) 00:13:47.779 00:13:48.721 QEMU NVMe Ctrl (12340 ): 17635 I/Os completed (+1407) 00:13:48.721 QEMU NVMe Ctrl (12341 ): 18409 I/Os completed (+1588) 00:13:48.721 00:13:49.656 QEMU NVMe Ctrl (12340 ): 19811 I/Os completed (+2176) 00:13:49.656 QEMU NVMe Ctrl (12341 ): 21128 I/Os completed (+2719) 00:13:49.656 00:13:49.915 19:35:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:49.915 19:35:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:49.915 19:35:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:49.915 19:35:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:49.915 [2024-07-15 19:35:40.615018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:49.915 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:49.915 [2024-07-15 19:35:40.616891] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.915 [2024-07-15 19:35:40.616947] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.915 [2024-07-15 19:35:40.616969] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.915 [2024-07-15 19:35:40.616997] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.915 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:49.915 [2024-07-15 19:35:40.620045] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.915 [2024-07-15 19:35:40.620095] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.915 [2024-07-15 19:35:40.620113] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.915 [2024-07-15 19:35:40.620132] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.915 19:35:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:49.915 19:35:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:49.915 [2024-07-15 19:35:40.647335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:49.915 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:49.915 [2024-07-15 19:35:40.649063] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.915 [2024-07-15 19:35:40.649118] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.915 [2024-07-15 19:35:40.649147] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.915 [2024-07-15 19:35:40.649168] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.916 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:49.916 [2024-07-15 19:35:40.651877] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.916 [2024-07-15 19:35:40.651922] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.916 [2024-07-15 19:35:40.651945] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.916 [2024-07-15 19:35:40.651966] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.916 19:35:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:49.916 19:35:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:50.174 19:35:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:50.175 19:35:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:50.175 19:35:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:50.175 19:35:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:50.175 19:35:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:50.175 19:35:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:50.175 19:35:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:50.175 19:35:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:50.175 Attaching to 0000:00:10.0 00:13:50.175 Attached to 0000:00:10.0 00:13:50.175 19:35:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:50.460 19:35:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:50.460 19:35:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:50.460 Attaching to 0000:00:11.0 00:13:50.460 Attached to 0000:00:11.0 00:13:50.740 QEMU NVMe Ctrl (12340 ): 1020 I/Os completed (+1020) 00:13:50.740 QEMU NVMe Ctrl (12341 ): 796 I/Os completed (+796) 00:13:50.740 00:13:51.676 QEMU NVMe Ctrl (12340 ): 2645 I/Os completed (+1625) 00:13:51.676 QEMU NVMe Ctrl (12341 ): 2443 I/Os completed (+1647) 00:13:51.676 00:13:53.051 QEMU NVMe Ctrl (12340 ): 4589 I/Os completed (+1944) 00:13:53.051 QEMU NVMe Ctrl (12341 ): 4389 I/Os completed (+1946) 00:13:53.051 00:13:53.985 QEMU NVMe Ctrl (12340 ): 6513 I/Os completed (+1924) 00:13:53.985 QEMU NVMe Ctrl (12341 ): 6315 I/Os completed (+1926) 00:13:53.985 00:13:54.919 QEMU NVMe Ctrl (12340 ): 8455 I/Os completed (+1942) 00:13:54.919 QEMU NVMe Ctrl (12341 ): 8308 I/Os completed (+1993) 00:13:54.919 00:13:55.853 QEMU NVMe Ctrl (12340 ): 10239 I/Os completed (+1784) 00:13:55.853 QEMU NVMe Ctrl (12341 ): 10093 I/Os completed (+1785) 00:13:55.853 00:13:56.789 QEMU NVMe Ctrl (12340 ): 12135 I/Os completed (+1896) 00:13:56.789 QEMU NVMe Ctrl (12341 ): 11989 I/Os completed (+1896) 00:13:56.789 00:13:57.726 QEMU NVMe Ctrl (12340 ): 14155 I/Os completed (+2020) 00:13:57.726 QEMU NVMe Ctrl (12341 ): 14014 I/Os completed (+2025) 00:13:57.726 00:13:58.660 QEMU 
NVMe Ctrl (12340 ): 16058 I/Os completed (+1903) 00:13:58.660 QEMU NVMe Ctrl (12341 ): 15926 I/Os completed (+1912) 00:13:58.660 00:14:00.073 QEMU NVMe Ctrl (12340 ): 17966 I/Os completed (+1908) 00:14:00.073 QEMU NVMe Ctrl (12341 ): 17835 I/Os completed (+1909) 00:14:00.073 00:14:01.007 QEMU NVMe Ctrl (12340 ): 19870 I/Os completed (+1904) 00:14:01.007 QEMU NVMe Ctrl (12341 ): 19744 I/Os completed (+1909) 00:14:01.007 00:14:01.938 QEMU NVMe Ctrl (12340 ): 21692 I/Os completed (+1822) 00:14:01.938 QEMU NVMe Ctrl (12341 ): 21504 I/Os completed (+1760) 00:14:01.938 00:14:02.569 19:35:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:02.569 19:35:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:02.569 19:35:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:02.569 19:35:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:02.569 [2024-07-15 19:35:52.999449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:14:02.569 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:02.569 [2024-07-15 19:35:53.002743] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.569 [2024-07-15 19:35:53.002849] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.569 [2024-07-15 19:35:53.002903] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.569 [2024-07-15 19:35:53.002955] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.569 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:02.569 [2024-07-15 19:35:53.008628] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.569 [2024-07-15 19:35:53.008713] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.569 [2024-07-15 19:35:53.008751] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.569 [2024-07-15 19:35:53.008815] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.569 19:35:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:02.569 19:35:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:02.569 [2024-07-15 19:35:53.044190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:14:02.569 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:02.569 [2024-07-15 19:35:53.046825] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.569 [2024-07-15 19:35:53.046902] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.569 [2024-07-15 19:35:53.046938] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.569 [2024-07-15 19:35:53.046968] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.569 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:02.569 [2024-07-15 19:35:53.050006] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.569 [2024-07-15 19:35:53.050075] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.569 [2024-07-15 19:35:53.050107] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.569 [2024-07-15 19:35:53.050138] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.569 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:02.569 EAL: Scan for (pci) bus failed. 00:14:02.569 19:35:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:02.569 19:35:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:02.569 19:35:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:02.569 19:35:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:02.569 19:35:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:02.569 19:35:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:02.569 19:35:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:02.569 19:35:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:02.569 19:35:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:02.569 19:35:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:02.569 Attaching to 0000:00:10.0 00:14:02.569 Attached to 0000:00:10.0 00:14:02.826 19:35:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:02.826 19:35:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:02.826 19:35:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:02.826 Attaching to 0000:00:11.0 00:14:02.826 Attached to 0000:00:11.0 00:14:02.826 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:02.826 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:02.826 [2024-07-15 19:35:53.450694] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:14:15.022 19:36:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:15.022 19:36:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:15.022 19:36:05 sw_hotplug -- common/autotest_common.sh@715 -- # time=43.26 00:14:15.022 19:36:05 sw_hotplug -- common/autotest_common.sh@716 -- # echo 43.26 00:14:15.022 19:36:05 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:14:15.022 19:36:05 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.26 00:14:15.022 19:36:05 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.26 2 00:14:15.022 remove_attach_helper took 43.26s to complete (handling 2 nvme drive(s)) 19:36:05 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:14:21.584 19:36:11 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 73976 00:14:21.584 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (73976) - No such process 00:14:21.584 19:36:11 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 73976 00:14:21.584 19:36:11 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:14:21.584 19:36:11 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:14:21.584 19:36:11 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:14:21.584 19:36:11 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=74509 00:14:21.584 19:36:11 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:21.584 19:36:11 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:14:21.584 19:36:11 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 74509 00:14:21.584 19:36:11 sw_hotplug -- common/autotest_common.sh@829 -- # '[' -z 74509 ']' 00:14:21.584 19:36:11 sw_hotplug -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.584 19:36:11 sw_hotplug -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:21.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.584 19:36:11 sw_hotplug -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.584 19:36:11 sw_hotplug -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:21.584 19:36:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:21.584 [2024-07-15 19:36:11.594273] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:14:21.584 [2024-07-15 19:36:11.594463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74509 ] 00:14:21.584 [2024-07-15 19:36:11.769375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.584 [2024-07-15 19:36:12.002587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.516 19:36:12 sw_hotplug -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:22.516 19:36:12 sw_hotplug -- common/autotest_common.sh@862 -- # return 0 00:14:22.516 19:36:12 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:22.517 19:36:12 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.517 19:36:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:22.517 19:36:12 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.517 19:36:12 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:14:22.517 19:36:12 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:22.517 19:36:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:22.517 19:36:12 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:14:22.517 19:36:12 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:14:22.517 19:36:12 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:14:22.517 19:36:12 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:14:22.517 19:36:12 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:14:22.517 19:36:12 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:22.517 19:36:12 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:22.517 19:36:12 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:22.517 19:36:12 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:22.517 19:36:12 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:29.074 19:36:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:29.074 19:36:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:29.074 19:36:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:29.074 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:29.074 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:29.074 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:29.074 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:29.074 [2024-07-15 19:36:19.049624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
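In this second phase (tgt_run_hotplug) the removal is observed from inside an SPDK target rather than the standalone example: bdev_nvme_set_hotplug -e turns on PCIe hotplug monitoring, and the bdev_bdfs helper expanded in the trace below polls bdev_get_bdevs and extracts each bdev's PCI address to decide which controllers are still present. The same poll done by hand, using the rpc.py path and jq filter that appear in this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_set_hotplug -e                # enable hotplug monitoring in the target
    # list the PCI addresses backing the current NVMe bdevs (this is what bdev_bdfs does)
    $rpc bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u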
00:14:29.074 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:29.074 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:29.074 [2024-07-15 19:36:19.052695] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:29.074 [2024-07-15 19:36:19.052739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.074 [2024-07-15 19:36:19.052762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.074 [2024-07-15 19:36:19.052812] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:29.074 [2024-07-15 19:36:19.052831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.074 [2024-07-15 19:36:19.052847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.074 [2024-07-15 19:36:19.052867] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:29.074 [2024-07-15 19:36:19.052891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.074 [2024-07-15 19:36:19.052908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.074 [2024-07-15 19:36:19.052923] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:29.074 [2024-07-15 19:36:19.052942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.074 [2024-07-15 19:36:19.052957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.074 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:29.074 19:36:19 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.074 19:36:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:29.074 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:29.074 19:36:19 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.074 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:29.074 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:29.074 [2024-07-15 19:36:19.549662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:14:29.074 [2024-07-15 19:36:19.552822] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:29.074 [2024-07-15 19:36:19.552878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.074 [2024-07-15 19:36:19.552898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.074 [2024-07-15 19:36:19.552928] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:29.074 [2024-07-15 19:36:19.552942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.074 [2024-07-15 19:36:19.552959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.074 [2024-07-15 19:36:19.552975] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:29.074 [2024-07-15 19:36:19.552991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.074 [2024-07-15 19:36:19.553005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.074 [2024-07-15 19:36:19.553023] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:29.074 [2024-07-15 19:36:19.553036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.074 [2024-07-15 19:36:19.553054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.074 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:29.074 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:29.074 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:29.074 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:29.074 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:29.074 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:29.074 19:36:19 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.074 19:36:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:29.074 19:36:19 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.074 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:29.074 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:29.074 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:29.074 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:29.074 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:29.332 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:29.332 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:29.332 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:29.332 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:29.332 19:36:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:14:29.332 19:36:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:29.332 19:36:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:29.332 19:36:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:41.521 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:41.521 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:41.521 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:41.521 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:41.521 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:41.521 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:41.521 19:36:32 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.521 19:36:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:41.521 19:36:32 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.521 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:41.521 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:41.521 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:41.521 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:41.521 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:41.521 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:41.521 [2024-07-15 19:36:32.149953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:14:41.521 [2024-07-15 19:36:32.152818] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:41.521 [2024-07-15 19:36:32.152866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:41.521 [2024-07-15 19:36:32.152888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:41.521 [2024-07-15 19:36:32.152914] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:41.521 [2024-07-15 19:36:32.152930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:41.521 [2024-07-15 19:36:32.152943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:41.521 [2024-07-15 19:36:32.152960] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:41.521 [2024-07-15 19:36:32.152973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:41.521 [2024-07-15 19:36:32.152988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:41.521 [2024-07-15 19:36:32.153002] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:41.521 [2024-07-15 19:36:32.153016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:41.521 [2024-07-15 19:36:32.153029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:41.521 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:41.521 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:41.521 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:41.521 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:41.521 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:41.521 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:41.521 19:36:32 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.521 19:36:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:41.521 19:36:32 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.521 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:41.521 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:42.087 [2024-07-15 19:36:32.649980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:14:42.087 [2024-07-15 19:36:32.652471] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:42.087 [2024-07-15 19:36:32.652522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:42.087 [2024-07-15 19:36:32.652540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:42.087 [2024-07-15 19:36:32.652569] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:42.087 [2024-07-15 19:36:32.652582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:42.087 [2024-07-15 19:36:32.652597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:42.087 [2024-07-15 19:36:32.652610] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:42.088 [2024-07-15 19:36:32.652624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:42.088 [2024-07-15 19:36:32.652636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:42.088 [2024-07-15 19:36:32.652651] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:42.088 [2024-07-15 19:36:32.652663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:42.088 [2024-07-15 19:36:32.652677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:42.088 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:42.088 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:42.088 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:42.088 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:42.088 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:42.088 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:14:42.088 19:36:32 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.088 19:36:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:42.088 19:36:32 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.088 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:42.088 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:42.345 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:42.345 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:42.345 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:42.345 19:36:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:42.345 19:36:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:42.345 19:36:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:42.345 19:36:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:42.345 19:36:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:42.345 19:36:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:42.345 19:36:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:42.345 19:36:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:54.583 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:54.583 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:54.583 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:54.583 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:54.583 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:54.583 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:54.583 19:36:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.583 19:36:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:54.583 19:36:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.583 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:54.583 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:54.583 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:54.583 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:54.583 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:54.583 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:54.583 [2024-07-15 19:36:45.250337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:14:54.583 [2024-07-15 19:36:45.253270] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:54.583 [2024-07-15 19:36:45.253315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.583 [2024-07-15 19:36:45.253335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:54.583 [2024-07-15 19:36:45.253360] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:54.583 [2024-07-15 19:36:45.253375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.583 [2024-07-15 19:36:45.253387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:54.583 [2024-07-15 19:36:45.253406] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:54.583 [2024-07-15 19:36:45.253417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.583 [2024-07-15 19:36:45.253431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:54.583 [2024-07-15 19:36:45.253444] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:54.583 [2024-07-15 19:36:45.253458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.583 [2024-07-15 19:36:45.253470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:54.583 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:54.583 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:54.583 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:54.583 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:54.583 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:54.583 19:36:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.583 19:36:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:54.583 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:54.583 19:36:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.583 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:54.583 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:55.150 [2024-07-15 19:36:45.650362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:14:55.150 [2024-07-15 19:36:45.653127] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:55.150 [2024-07-15 19:36:45.653180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.150 [2024-07-15 19:36:45.653199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.150 [2024-07-15 19:36:45.653230] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:55.150 [2024-07-15 19:36:45.653244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.150 [2024-07-15 19:36:45.653262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.150 [2024-07-15 19:36:45.653278] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:55.150 [2024-07-15 19:36:45.653295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.150 [2024-07-15 19:36:45.653308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.150 [2024-07-15 19:36:45.653332] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:55.150 [2024-07-15 19:36:45.653344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.150 [2024-07-15 19:36:45.653362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.150 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:55.150 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:55.150 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:55.150 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:55.150 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:55.150 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:55.150 19:36:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.150 19:36:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:55.150 19:36:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.150 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:55.150 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:55.409 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:55.409 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:55.409 19:36:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:55.409 19:36:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:55.409 19:36:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:55.409 19:36:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:55.409 19:36:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:55.409 19:36:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:14:55.409 19:36:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:55.668 19:36:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:55.668 19:36:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:07.876 19:36:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:07.877 19:36:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:07.877 19:36:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:07.877 19:36:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:07.877 19:36:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:07.877 19:36:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:07.877 19:36:58 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.877 19:36:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:07.877 19:36:58 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.877 19:36:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:07.877 19:36:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:07.877 19:36:58 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.30 00:15:07.877 19:36:58 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.30 00:15:07.877 19:36:58 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:15:07.877 19:36:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.30 00:15:07.877 19:36:58 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.30 2 00:15:07.877 remove_attach_helper took 45.30s to complete (handling 2 nvme drive(s)) 19:36:58 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:15:07.877 19:36:58 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.877 19:36:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:07.877 19:36:58 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.877 19:36:58 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:15:07.877 19:36:58 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.877 19:36:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:07.877 19:36:58 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.877 19:36:58 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:15:07.877 19:36:58 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:07.877 19:36:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:15:07.877 19:36:58 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:15:07.877 19:36:58 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:15:07.877 19:36:58 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:15:07.877 19:36:58 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:15:07.877 19:36:58 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:15:07.877 19:36:58 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:07.877 19:36:58 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:07.877 19:36:58 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:15:07.877 19:36:58 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:07.877 19:36:58 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:14.485 19:37:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:14.485 19:37:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:14.485 19:37:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:14.485 19:37:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:14.485 19:37:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:14.485 19:37:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:14.485 19:37:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:14.485 19:37:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:14.485 19:37:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:14.485 19:37:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:14.485 19:37:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:14.485 19:37:04 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.485 19:37:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:14.485 [2024-07-15 19:37:04.385705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:15:14.485 [2024-07-15 19:37:04.388275] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:14.485 [2024-07-15 19:37:04.388330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.485 [2024-07-15 19:37:04.388361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.485 [2024-07-15 19:37:04.388398] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:14.485 [2024-07-15 19:37:04.388421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.485 [2024-07-15 19:37:04.388442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.485 [2024-07-15 19:37:04.388469] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:14.485 [2024-07-15 19:37:04.388489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.485 [2024-07-15 19:37:04.388513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.485 [2024-07-15 19:37:04.388534] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:14.485 [2024-07-15 19:37:04.388557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.485 [2024-07-15 19:37:04.388577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.485 19:37:04 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.485 19:37:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:14.485 19:37:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:14.485 [2024-07-15 19:37:04.785690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:15:14.485 [2024-07-15 19:37:04.788128] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:14.485 [2024-07-15 19:37:04.788188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.485 [2024-07-15 19:37:04.788215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.485 [2024-07-15 19:37:04.788250] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:14.485 [2024-07-15 19:37:04.788270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.485 [2024-07-15 19:37:04.788292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.485 [2024-07-15 19:37:04.788314] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:14.485 [2024-07-15 19:37:04.788337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.485 [2024-07-15 19:37:04.788357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.485 [2024-07-15 19:37:04.788382] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:14.485 [2024-07-15 19:37:04.788400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.485 [2024-07-15 19:37:04.788422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.485 19:37:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:14.485 19:37:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:14.485 19:37:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:14.485 19:37:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:14.485 19:37:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:14.485 19:37:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:14.485 19:37:04 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.485 19:37:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:14.485 19:37:04 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.485 19:37:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:14.485 19:37:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:14.485 19:37:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:14.485 19:37:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:14.485 19:37:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:14.485 19:37:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:14.485 19:37:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:14.485 19:37:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:14.485 19:37:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:14.485 19:37:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:15:14.744 19:37:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:14.744 19:37:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:14.744 19:37:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:26.947 19:37:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:26.947 19:37:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:26.947 19:37:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:26.947 19:37:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:26.947 19:37:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:26.947 19:37:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:26.947 19:37:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.947 19:37:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:26.947 19:37:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.947 19:37:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:26.947 19:37:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:26.947 19:37:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:26.947 19:37:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:26.947 19:37:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:26.947 19:37:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:26.947 19:37:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:26.947 19:37:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:26.948 19:37:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:26.948 19:37:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:26.948 19:37:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.948 19:37:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:26.948 19:37:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:26.948 19:37:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:26.948 [2024-07-15 19:37:17.485970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:15:26.948 [2024-07-15 19:37:17.487991] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:26.948 [2024-07-15 19:37:17.488044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.948 [2024-07-15 19:37:17.488069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.948 [2024-07-15 19:37:17.488095] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:26.948 [2024-07-15 19:37:17.488111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.948 [2024-07-15 19:37:17.488125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.948 [2024-07-15 19:37:17.488142] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:26.948 [2024-07-15 19:37:17.488155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.948 [2024-07-15 19:37:17.488171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.948 [2024-07-15 19:37:17.488185] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:26.948 [2024-07-15 19:37:17.488201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.948 [2024-07-15 19:37:17.488213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.948 19:37:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.948 19:37:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:26.948 19:37:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:27.206 [2024-07-15 19:37:17.886004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:15:27.206 [2024-07-15 19:37:17.888075] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:27.206 [2024-07-15 19:37:17.888129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.206 [2024-07-15 19:37:17.888147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.206 [2024-07-15 19:37:17.888176] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:27.206 [2024-07-15 19:37:17.888189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.206 [2024-07-15 19:37:17.888208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.206 [2024-07-15 19:37:17.888222] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:27.206 [2024-07-15 19:37:17.888237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.206 [2024-07-15 19:37:17.888251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.206 [2024-07-15 19:37:17.888267] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:27.206 [2024-07-15 19:37:17.888279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.206 [2024-07-15 19:37:17.888294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.465 19:37:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:27.465 19:37:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:27.465 19:37:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:27.465 19:37:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:27.465 19:37:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:27.465 19:37:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:27.465 19:37:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.465 19:37:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:27.465 19:37:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.465 19:37:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:27.465 19:37:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:27.465 19:37:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:27.465 19:37:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:27.465 19:37:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:27.724 19:37:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:27.724 19:37:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:27.724 19:37:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:27.724 19:37:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:27.724 19:37:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:27.724 19:37:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:27.724 19:37:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:27.724 19:37:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:39.924 19:37:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:39.924 19:37:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:39.924 19:37:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:39.924 19:37:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:39.924 19:37:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:39.924 19:37:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:39.924 19:37:30 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.924 19:37:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:39.924 19:37:30 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.924 19:37:30 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:39.924 19:37:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:39.924 19:37:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:39.924 19:37:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:39.924 19:37:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:39.924 19:37:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:39.924 19:37:30 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:39.924 19:37:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:39.924 19:37:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:39.924 19:37:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:39.924 19:37:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:39.924 19:37:30 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.924 19:37:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:39.924 19:37:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:39.924 [2024-07-15 19:37:30.586223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:15:39.924 [2024-07-15 19:37:30.588258] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.924 [2024-07-15 19:37:30.588304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.924 [2024-07-15 19:37:30.588327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.924 [2024-07-15 19:37:30.588354] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.924 [2024-07-15 19:37:30.588372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.924 [2024-07-15 19:37:30.588386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.924 [2024-07-15 19:37:30.588402] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.924 [2024-07-15 19:37:30.588415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.924 [2024-07-15 19:37:30.588434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.924 [2024-07-15 19:37:30.588447] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.924 [2024-07-15 19:37:30.588462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.924 [2024-07-15 19:37:30.588475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.924 19:37:30 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.924 19:37:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:39.924 19:37:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:40.490 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:40.490 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:40.490 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:40.490 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:40.490 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:40.490 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:40.491 19:37:31 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.491 19:37:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:40.491 19:37:31 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.491 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:40.491 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:40.748 [2024-07-15 19:37:31.286264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:15:40.748 [2024-07-15 19:37:31.288324] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:40.748 [2024-07-15 19:37:31.288378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.749 [2024-07-15 19:37:31.288398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.749 [2024-07-15 19:37:31.288427] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:40.749 [2024-07-15 19:37:31.288443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.749 [2024-07-15 19:37:31.288461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.749 [2024-07-15 19:37:31.288476] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:40.749 [2024-07-15 19:37:31.288493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.749 [2024-07-15 19:37:31.288507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.749 [2024-07-15 19:37:31.288525] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:40.749 [2024-07-15 19:37:31.288538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.749 [2024-07-15 19:37:31.288559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.006 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:41.006 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:41.006 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:41.006 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:41.006 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:41.006 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:41.006 19:37:31 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.006 19:37:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:41.006 19:37:31 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.006 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:41.006 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:41.263 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:41.263 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:41.263 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:41.263 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:41.263 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:41.263 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:41.263 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:41.263 19:37:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:15:41.263 19:37:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:41.263 19:37:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:41.263 19:37:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:53.522 19:37:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:53.522 19:37:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:53.522 19:37:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:53.522 19:37:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:53.522 19:37:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:53.522 19:37:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:53.522 19:37:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.522 19:37:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:53.522 19:37:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.522 19:37:44 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:53.522 19:37:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:53.522 19:37:44 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.80 00:15:53.522 19:37:44 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.80 00:15:53.522 19:37:44 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:15:53.522 19:37:44 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.80 00:15:53.522 19:37:44 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.80 2 00:15:53.522 remove_attach_helper took 45.80s to complete (handling 2 nvme drive(s)) 19:37:44 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:15:53.522 19:37:44 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 74509 00:15:53.522 19:37:44 sw_hotplug -- common/autotest_common.sh@948 -- # '[' -z 74509 ']' 00:15:53.522 19:37:44 sw_hotplug -- common/autotest_common.sh@952 -- # kill -0 74509 00:15:53.522 19:37:44 sw_hotplug -- common/autotest_common.sh@953 -- # uname 00:15:53.522 19:37:44 sw_hotplug -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:53.522 19:37:44 sw_hotplug -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74509 00:15:53.522 19:37:44 sw_hotplug -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:53.522 19:37:44 sw_hotplug -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:53.522 19:37:44 sw_hotplug -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74509' 00:15:53.522 killing process with pid 74509 00:15:53.522 19:37:44 sw_hotplug -- common/autotest_common.sh@967 -- # kill 74509 00:15:53.522 19:37:44 sw_hotplug -- common/autotest_common.sh@972 -- # wait 74509 00:15:56.803 19:37:47 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:56.803 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:57.370 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:57.370 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:57.370 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:57.370 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:57.629 00:15:57.629 real 2m34.254s 00:15:57.629 user 1m52.684s 00:15:57.629 sys 0m22.152s 00:15:57.629 ************************************ 
00:15:57.629 END TEST sw_hotplug 00:15:57.629 ************************************ 00:15:57.629 19:37:48 sw_hotplug -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:57.629 19:37:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:57.629 19:37:48 -- common/autotest_common.sh@1142 -- # return 0 00:15:57.629 19:37:48 -- spdk/autotest.sh@247 -- # [[ 1 -eq 1 ]] 00:15:57.629 19:37:48 -- spdk/autotest.sh@248 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:57.629 19:37:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:57.629 19:37:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.629 19:37:48 -- common/autotest_common.sh@10 -- # set +x 00:15:57.629 ************************************ 00:15:57.629 START TEST nvme_xnvme 00:15:57.629 ************************************ 00:15:57.629 19:37:48 nvme_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:57.629 * Looking for test storage... 00:15:57.629 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:57.629 19:37:48 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:57.629 19:37:48 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.629 19:37:48 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.629 19:37:48 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.629 19:37:48 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.629 19:37:48 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.629 19:37:48 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.629 19:37:48 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:15:57.629 19:37:48 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.629 19:37:48 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:15:57.629 
19:37:48 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:57.629 19:37:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.629 19:37:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:57.629 ************************************ 00:15:57.629 START TEST xnvme_to_malloc_dd_copy 00:15:57.629 ************************************ 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1123 -- # malloc_to_xnvme_copy 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # return 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:57.629 19:37:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:57.629 { 00:15:57.629 "subsystems": [ 00:15:57.629 { 00:15:57.629 "subsystem": "bdev", 00:15:57.629 "config": [ 00:15:57.629 { 00:15:57.629 "params": { 00:15:57.629 "block_size": 512, 00:15:57.629 "num_blocks": 2097152, 00:15:57.629 "name": "malloc0" 00:15:57.629 }, 00:15:57.629 "method": 
"bdev_malloc_create" 00:15:57.629 }, 00:15:57.629 { 00:15:57.629 "params": { 00:15:57.629 "io_mechanism": "libaio", 00:15:57.629 "filename": "/dev/nullb0", 00:15:57.629 "name": "null0" 00:15:57.629 }, 00:15:57.629 "method": "bdev_xnvme_create" 00:15:57.629 }, 00:15:57.629 { 00:15:57.629 "method": "bdev_wait_for_examine" 00:15:57.629 } 00:15:57.629 ] 00:15:57.629 } 00:15:57.629 ] 00:15:57.629 } 00:15:57.888 [2024-07-15 19:37:48.445949] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:15:57.888 [2024-07-15 19:37:48.446245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75879 ] 00:15:57.888 [2024-07-15 19:37:48.615597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.455 [2024-07-15 19:37:48.954126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.943  Copying: 223/1024 [MB] (223 MBps) Copying: 459/1024 [MB] (235 MBps) Copying: 691/1024 [MB] (231 MBps) Copying: 924/1024 [MB] (233 MBps) Copying: 1024/1024 [MB] (average 231 MBps) 00:16:08.943 00:16:08.943 19:37:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:16:08.943 19:37:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:16:08.943 19:37:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:08.943 19:37:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:09.200 { 00:16:09.200 "subsystems": [ 00:16:09.200 { 00:16:09.200 "subsystem": "bdev", 00:16:09.200 "config": [ 00:16:09.200 { 00:16:09.200 "params": { 00:16:09.200 "block_size": 512, 00:16:09.200 "num_blocks": 2097152, 00:16:09.200 "name": "malloc0" 00:16:09.200 }, 00:16:09.200 "method": "bdev_malloc_create" 00:16:09.200 }, 00:16:09.200 { 00:16:09.200 "params": { 00:16:09.200 "io_mechanism": "libaio", 00:16:09.200 "filename": "/dev/nullb0", 00:16:09.200 "name": "null0" 00:16:09.200 }, 00:16:09.200 "method": "bdev_xnvme_create" 00:16:09.200 }, 00:16:09.200 { 00:16:09.200 "method": "bdev_wait_for_examine" 00:16:09.200 } 00:16:09.200 ] 00:16:09.200 } 00:16:09.200 ] 00:16:09.200 } 00:16:09.200 [2024-07-15 19:37:59.833194] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:16:09.200 [2024-07-15 19:37:59.833361] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76006 ] 00:16:09.458 [2024-07-15 19:38:00.019181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.715 [2024-07-15 19:38:00.271103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.812  Copying: 239/1024 [MB] (239 MBps) Copying: 488/1024 [MB] (249 MBps) Copying: 732/1024 [MB] (244 MBps) Copying: 975/1024 [MB] (243 MBps) Copying: 1024/1024 [MB] (average 243 MBps) 00:16:20.812 00:16:20.812 19:38:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:16:20.812 19:38:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:16:20.812 19:38:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:16:20.812 19:38:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:16:20.812 19:38:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:20.812 19:38:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:20.812 { 00:16:20.812 "subsystems": [ 00:16:20.812 { 00:16:20.812 "subsystem": "bdev", 00:16:20.812 "config": [ 00:16:20.812 { 00:16:20.812 "params": { 00:16:20.812 "block_size": 512, 00:16:20.812 "num_blocks": 2097152, 00:16:20.812 "name": "malloc0" 00:16:20.812 }, 00:16:20.812 "method": "bdev_malloc_create" 00:16:20.812 }, 00:16:20.812 { 00:16:20.812 "params": { 00:16:20.812 "io_mechanism": "io_uring", 00:16:20.812 "filename": "/dev/nullb0", 00:16:20.812 "name": "null0" 00:16:20.812 }, 00:16:20.812 "method": "bdev_xnvme_create" 00:16:20.812 }, 00:16:20.812 { 00:16:20.812 "method": "bdev_wait_for_examine" 00:16:20.812 } 00:16:20.812 ] 00:16:20.812 } 00:16:20.812 ] 00:16:20.812 } 00:16:20.812 [2024-07-15 19:38:10.790370] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:16:20.812 [2024-07-15 19:38:10.790565] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76122 ] 00:16:20.812 [2024-07-15 19:38:10.971572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.812 [2024-07-15 19:38:11.213303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.476  Copying: 254/1024 [MB] (254 MBps) Copying: 506/1024 [MB] (252 MBps) Copying: 762/1024 [MB] (255 MBps) Copying: 1013/1024 [MB] (251 MBps) Copying: 1024/1024 [MB] (average 253 MBps) 00:16:31.476 00:16:31.476 19:38:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:16:31.476 19:38:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:16:31.476 19:38:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:31.476 19:38:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:31.476 { 00:16:31.476 "subsystems": [ 00:16:31.476 { 00:16:31.476 "subsystem": "bdev", 00:16:31.476 "config": [ 00:16:31.476 { 00:16:31.476 "params": { 00:16:31.476 "block_size": 512, 00:16:31.476 "num_blocks": 2097152, 00:16:31.476 "name": "malloc0" 00:16:31.476 }, 00:16:31.476 "method": "bdev_malloc_create" 00:16:31.476 }, 00:16:31.476 { 00:16:31.476 "params": { 00:16:31.476 "io_mechanism": "io_uring", 00:16:31.476 "filename": "/dev/nullb0", 00:16:31.476 "name": "null0" 00:16:31.476 }, 00:16:31.476 "method": "bdev_xnvme_create" 00:16:31.476 }, 00:16:31.476 { 00:16:31.476 "method": "bdev_wait_for_examine" 00:16:31.476 } 00:16:31.476 ] 00:16:31.476 } 00:16:31.476 ] 00:16:31.476 } 00:16:31.476 [2024-07-15 19:38:21.431933] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:16:31.476 [2024-07-15 19:38:21.432102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76245 ] 00:16:31.476 [2024-07-15 19:38:21.614802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.476 [2024-07-15 19:38:21.851533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.000  Copying: 260/1024 [MB] (260 MBps) Copying: 522/1024 [MB] (262 MBps) Copying: 783/1024 [MB] (261 MBps) Copying: 1022/1024 [MB] (239 MBps) Copying: 1024/1024 [MB] (average 255 MBps) 00:16:41.000 00:16:41.000 19:38:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:16:41.000 19:38:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@195 -- # modprobe -r null_blk 00:16:41.259 ************************************ 00:16:41.259 END TEST xnvme_to_malloc_dd_copy 00:16:41.259 ************************************ 00:16:41.259 00:16:41.259 real 0m43.461s 00:16:41.259 user 0m38.555s 00:16:41.259 sys 0m4.378s 00:16:41.259 19:38:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:41.259 19:38:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:41.259 19:38:31 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:16:41.259 19:38:31 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:41.259 19:38:31 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:41.259 19:38:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:41.259 19:38:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:41.259 ************************************ 00:16:41.259 START TEST xnvme_bdevperf 00:16:41.259 ************************************ 00:16:41.259 19:38:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1123 -- # xnvme_bdevperf 00:16:41.259 19:38:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:16:41.259 19:38:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:16:41.259 19:38:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:16:41.259 19:38:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # return 00:16:41.259 19:38:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:16:41.259 19:38:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:16:41.259 19:38:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:16:41.259 19:38:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:16:41.259 19:38:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:16:41.259 19:38:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:16:41.259 19:38:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:16:41.259 19:38:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:16:41.259 19:38:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:16:41.259 19:38:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:16:41.259 19:38:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 
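The xnvme_to_malloc_dd_copy test that ends above ran for roughly 43.5 s of wall time and repeats the same pair of copies once per I/O mechanism; the averages recorded in the trace come to 231 and 243 MBps with libaio versus 253 and 255 MBps with io_uring on the 1 GiB null_blk device. The outer loop amounts to the following sketch, reusing the spdk_dd_null_copy helper assumed earlier:

for io in libaio io_uring; do
    spdk_dd_null_copy "$io" malloc0 null0   # forward: malloc bdev -> xnvme/null_blk bdev
    spdk_dd_null_copy "$io" null0 malloc0   # reverse: xnvme/null_blk bdev -> malloc bdev
done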
00:16:41.259 19:38:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:16:41.259 19:38:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:16:41.259 19:38:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:16:41.259 19:38:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:41.259 19:38:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:41.259 { 00:16:41.259 "subsystems": [ 00:16:41.259 { 00:16:41.259 "subsystem": "bdev", 00:16:41.259 "config": [ 00:16:41.259 { 00:16:41.259 "params": { 00:16:41.259 "io_mechanism": "libaio", 00:16:41.259 "filename": "/dev/nullb0", 00:16:41.259 "name": "null0" 00:16:41.259 }, 00:16:41.259 "method": "bdev_xnvme_create" 00:16:41.259 }, 00:16:41.259 { 00:16:41.259 "method": "bdev_wait_for_examine" 00:16:41.259 } 00:16:41.259 ] 00:16:41.259 } 00:16:41.259 ] 00:16:41.259 } 00:16:41.259 [2024-07-15 19:38:31.973751] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:16:41.259 [2024-07-15 19:38:31.973890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76392 ] 00:16:41.517 [2024-07-15 19:38:32.138260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.774 [2024-07-15 19:38:32.373281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.030 Running I/O for 5 seconds... 00:16:47.307 00:16:47.307 Latency(us) 00:16:47.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.308 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:47.308 null0 : 5.00 154807.59 604.72 0.00 0.00 410.89 130.68 5929.45 00:16:47.308 =================================================================================================================== 00:16:47.308 Total : 154807.59 604.72 0.00 0.00 410.89 130.68 5929.45 00:16:48.687 19:38:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:16:48.687 19:38:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:16:48.687 19:38:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:16:48.687 19:38:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:16:48.687 19:38:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:48.687 19:38:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:48.687 { 00:16:48.687 "subsystems": [ 00:16:48.687 { 00:16:48.687 "subsystem": "bdev", 00:16:48.687 "config": [ 00:16:48.687 { 00:16:48.687 "params": { 00:16:48.687 "io_mechanism": "io_uring", 00:16:48.687 "filename": "/dev/nullb0", 00:16:48.687 "name": "null0" 00:16:48.687 }, 00:16:48.687 "method": "bdev_xnvme_create" 00:16:48.687 }, 00:16:48.687 { 00:16:48.687 "method": "bdev_wait_for_examine" 00:16:48.687 } 00:16:48.687 ] 00:16:48.687 } 00:16:48.687 ] 00:16:48.687 } 00:16:48.687 [2024-07-15 19:38:39.237211] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
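The xnvme_bdevperf phase reuses the same /dev/nullb0-backed xnvme bdev, this time without a malloc bdev, and drives it with the bdevperf example application: the libaio pass above settled at about 154.8K IOPS of 4 KiB random reads at queue depth 64 (average latency around 411 us), and the io_uring pass starting here repeats the identical workload. Restated as a hedged stand-alone sketch (the relative paths and the process-substitution handoff are assumptions):

conf='{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
    "method": "bdev_xnvme_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }'

# 5-second 4 KiB randread run at queue depth 64 against the xnvme bdev "null0".
./build/examples/bdevperf --json <(printf '%s\n' "$conf") \
    -q 64 -w randread -t 5 -T null0 -o 4096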
00:16:48.687 [2024-07-15 19:38:39.237388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76473 ] 00:16:48.687 [2024-07-15 19:38:39.413639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.946 [2024-07-15 19:38:39.648236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.513 Running I/O for 5 seconds... 00:16:54.804 00:16:54.804 Latency(us) 00:16:54.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.804 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:54.804 null0 : 5.00 196440.41 767.35 0.00 0.00 323.25 197.97 628.05 00:16:54.804 =================================================================================================================== 00:16:54.804 Total : 196440.41 767.35 0.00 0.00 323.25 197.97 628.05 00:16:55.792 19:38:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:16:55.792 19:38:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@195 -- # modprobe -r null_blk 00:16:55.792 00:16:55.792 real 0m14.580s 00:16:55.792 user 0m11.259s 00:16:55.792 sys 0m3.107s 00:16:55.792 19:38:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:55.792 19:38:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:55.792 ************************************ 00:16:55.792 END TEST xnvme_bdevperf 00:16:55.792 ************************************ 00:16:55.792 19:38:46 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:16:55.792 ************************************ 00:16:55.792 END TEST nvme_xnvme 00:16:55.792 ************************************ 00:16:55.792 00:16:55.792 real 0m58.251s 00:16:55.792 user 0m49.886s 00:16:55.792 sys 0m7.624s 00:16:55.792 19:38:46 nvme_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:55.792 19:38:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:55.792 19:38:46 -- common/autotest_common.sh@1142 -- # return 0 00:16:55.792 19:38:46 -- spdk/autotest.sh@249 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:55.792 19:38:46 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:55.792 19:38:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:55.792 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:16:55.792 ************************************ 00:16:55.792 START TEST blockdev_xnvme 00:16:55.792 ************************************ 00:16:55.792 19:38:46 blockdev_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:56.049 * Looking for test storage... 
00:16:56.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@674 -- # uname -s 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@682 -- # test_type=xnvme 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@684 -- # dek= 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == bdev ]] 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == crypto_* ]] 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:16:56.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=76613 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 76613 00:16:56.049 19:38:46 blockdev_xnvme -- common/autotest_common.sh@829 -- # '[' -z 76613 ']' 00:16:56.049 19:38:46 blockdev_xnvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.049 19:38:46 blockdev_xnvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:56.049 19:38:46 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:56.049 19:38:46 blockdev_xnvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.049 19:38:46 blockdev_xnvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:56.049 19:38:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:56.049 [2024-07-15 19:38:46.774660] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
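blockdev_xnvme runs against a long-lived spdk_tgt instead of one-shot tools: the target is started in the background, waitforlisten blocks until its RPC socket answers, and the trap seen above tears the process down on exit. A simplified sketch of that pattern, with the polling loop condensed (the real helpers in autotest_common.sh retry with timeouts and track the pid more carefully):

./build/bin/spdk_tgt &
spdk_tgt_pid=$!
trap 'kill "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
# waitforlisten, roughly: poll until the RPC socket accepts a request.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.1
done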
00:16:56.049 [2024-07-15 19:38:46.775701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76613 ] 00:16:56.307 [2024-07-15 19:38:46.960617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.566 [2024-07-15 19:38:47.198616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.502 19:38:48 blockdev_xnvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.502 19:38:48 blockdev_xnvme -- common/autotest_common.sh@862 -- # return 0 00:16:57.502 19:38:48 blockdev_xnvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:16:57.502 19:38:48 blockdev_xnvme -- bdev/blockdev.sh@729 -- # setup_xnvme_conf 00:16:57.502 19:38:48 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:16:57.502 19:38:48 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:16:57.502 19:38:48 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:58.069 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:58.069 Waiting for block devices as requested 00:16:58.328 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:58.328 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:58.587 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:58.587 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:17:03.907 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:17:03.907 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1670 -- # local nvme bdf 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:17:03.907 19:38:54 blockdev_xnvme -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:03.907 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:03.908 19:38:54 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:17:03.908 nvme0n1 00:17:03.908 nvme1n1 00:17:03.908 nvme2n1 00:17:03.908 nvme2n2 00:17:03.908 nvme2n3 00:17:03.908 nvme3n1 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@740 -- # cat 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.908 
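setup_xnvme_conf, traced above, walks every /dev/nvme*n* namespace, skips zoned ones, and turns the rest into bdev_xnvme_create calls with the io_uring mechanism; feeding the batch through rpc_cmd yields the six bdevs echoed back as nvme0n1 through nvme3n1. A compressed sketch of that enumeration (zoned-device bookkeeping trimmed, and one rpc.py call per namespace instead of the shared rpc_cmd session):

io_mechanism=io_uring
nvmes=()
for nvme in /dev/nvme*n*; do
    [[ -b $nvme ]] || continue                               # block devices only
    zoned=/sys/block/${nvme##*/}/queue/zoned
    [[ -e $zoned && $(cat "$zoned") != none ]] && continue   # skip zoned namespaces
    nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism")
done
for cmd in "${nvmes[@]}"; do
    ./scripts/rpc.py -s /var/tmp/spdk.sock $cmd              # word splitting is intentional here
done
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_wait_for_examine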
19:38:54 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "2599b578-b650-4a0d-a091-a0d0647318f5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2599b578-b650-4a0d-a091-a0d0647318f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "180d98c3-ac42-4bf8-bbec-abe1f8a24d04"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "180d98c3-ac42-4bf8-bbec-abe1f8a24d04",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "c49e6a02-960c-40d2-9564-50765a021fd2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c49e6a02-960c-40d2-9564-50765a021fd2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "46554e1b-edbf-453c-8ea6-0b2b8f1b31e7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "46554e1b-edbf-453c-8ea6-0b2b8f1b31e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 
0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "1b001754-8ab1-41cd-b1b1-a59c628ddec0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1b001754-8ab1-41cd-b1b1-a59c628ddec0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "3136509c-5a7e-424f-8591-850247288d58"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3136509c-5a7e-424f-8591-850247288d58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=nvme0n1 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:17:03.908 19:38:54 blockdev_xnvme -- bdev/blockdev.sh@754 -- # killprocess 76613 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@948 -- # '[' -z 76613 ']' 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@952 -- # kill -0 76613 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@953 -- # uname 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76613 00:17:03.908 killing process with pid 76613 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 76613' 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@967 -- # kill 76613 00:17:03.908 19:38:54 blockdev_xnvme -- common/autotest_common.sh@972 -- # wait 76613 00:17:07.189 19:38:57 blockdev_xnvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:07.189 19:38:57 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:17:07.189 19:38:57 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:17:07.189 19:38:57 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:07.189 19:38:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:07.189 ************************************ 00:17:07.189 START TEST bdev_hello_world 00:17:07.189 ************************************ 00:17:07.189 19:38:57 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:17:07.189 [2024-07-15 19:38:57.538451] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:17:07.189 [2024-07-15 19:38:57.538611] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77001 ] 00:17:07.189 [2024-07-15 19:38:57.711140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.518 [2024-07-15 19:38:58.015907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.823 [2024-07-15 19:38:58.513564] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:07.823 [2024-07-15 19:38:58.513618] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:17:07.823 [2024-07-15 19:38:58.513639] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:07.823 [2024-07-15 19:38:58.515782] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:07.823 [2024-07-15 19:38:58.516139] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:07.823 [2024-07-15 19:38:58.516157] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:07.824 [2024-07-15 19:38:58.516314] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
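bdev_hello_world exercises the stock hello_bdev example against the first xnvme bdev; the NOTICE lines above show the expected sequence of opening the bdev, writing a buffer, and reading back "Hello World!". As traced, the invocation reduces to the following (run from the repo root; bdev.json is assumed to hold the bdev config saved earlier in the run):

./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1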
00:17:07.824 00:17:07.824 [2024-07-15 19:38:58.516336] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:09.201 00:17:09.201 real 0m2.459s 00:17:09.201 user 0m2.095s 00:17:09.201 sys 0m0.247s 00:17:09.201 19:38:59 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:09.201 19:38:59 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:09.201 ************************************ 00:17:09.201 END TEST bdev_hello_world 00:17:09.201 ************************************ 00:17:09.201 19:38:59 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:17:09.201 19:38:59 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:17:09.201 19:38:59 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:09.201 19:38:59 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:09.201 19:38:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:09.201 ************************************ 00:17:09.201 START TEST bdev_bounds 00:17:09.201 ************************************ 00:17:09.201 19:38:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:17:09.201 Process bdevio pid: 77043 00:17:09.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.201 19:38:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=77043 00:17:09.201 19:38:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:09.201 19:38:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 77043' 00:17:09.201 19:38:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 77043 00:17:09.201 19:38:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 77043 ']' 00:17:09.201 19:38:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.201 19:38:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:09.201 19:38:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:09.201 19:38:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.201 19:38:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:09.201 19:38:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:09.460 [2024-07-15 19:39:00.067172] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:17:09.460 [2024-07-15 19:39:00.067355] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77043 ] 00:17:09.460 [2024-07-15 19:39:00.248844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:10.028 [2024-07-15 19:39:00.555017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.028 [2024-07-15 19:39:00.555186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.028 [2024-07-15 19:39:00.555226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.596 19:39:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:10.596 19:39:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:17:10.596 19:39:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:10.596 I/O targets: 00:17:10.596 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:17:10.596 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:17:10.596 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:10.596 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:10.596 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:10.596 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:17:10.596 00:17:10.596 00:17:10.596 CUnit - A unit testing framework for C - Version 2.1-3 00:17:10.596 http://cunit.sourceforge.net/ 00:17:10.596 00:17:10.596 00:17:10.596 Suite: bdevio tests on: nvme3n1 00:17:10.596 Test: blockdev write read block ...passed 00:17:10.596 Test: blockdev write zeroes read block ...passed 00:17:10.596 Test: blockdev write zeroes read no split ...passed 00:17:10.596 Test: blockdev write zeroes read split ...passed 00:17:10.596 Test: blockdev write zeroes read split partial ...passed 00:17:10.596 Test: blockdev reset ...passed 00:17:10.596 Test: blockdev write read 8 blocks ...passed 00:17:10.596 Test: blockdev write read size > 128k ...passed 00:17:10.596 Test: blockdev write read invalid size ...passed 00:17:10.597 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:10.597 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:10.597 Test: blockdev write read max offset ...passed 00:17:10.597 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:10.597 Test: blockdev writev readv 8 blocks ...passed 00:17:10.597 Test: blockdev writev readv 30 x 1block ...passed 00:17:10.597 Test: blockdev writev readv block ...passed 00:17:10.597 Test: blockdev writev readv size > 128k ...passed 00:17:10.597 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:10.597 Test: blockdev comparev and writev ...passed 00:17:10.597 Test: blockdev nvme passthru rw ...passed 00:17:10.597 Test: blockdev nvme passthru vendor specific ...passed 00:17:10.597 Test: blockdev nvme admin passthru ...passed 00:17:10.597 Test: blockdev copy ...passed 00:17:10.597 Suite: bdevio tests on: nvme2n3 00:17:10.597 Test: blockdev write read block ...passed 00:17:10.597 Test: blockdev write zeroes read block ...passed 00:17:10.597 Test: blockdev write zeroes read no split ...passed 00:17:10.597 Test: blockdev write zeroes read split ...passed 00:17:10.855 Test: blockdev write zeroes read split partial ...passed 00:17:10.855 Test: blockdev reset ...passed 
00:17:10.855 Test: blockdev write read 8 blocks ...passed 00:17:10.855 Test: blockdev write read size > 128k ...passed 00:17:10.855 Test: blockdev write read invalid size ...passed 00:17:10.855 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:10.855 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:10.855 Test: blockdev write read max offset ...passed 00:17:10.855 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:10.855 Test: blockdev writev readv 8 blocks ...passed 00:17:10.855 Test: blockdev writev readv 30 x 1block ...passed 00:17:10.855 Test: blockdev writev readv block ...passed 00:17:10.855 Test: blockdev writev readv size > 128k ...passed 00:17:10.855 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:10.855 Test: blockdev comparev and writev ...passed 00:17:10.855 Test: blockdev nvme passthru rw ...passed 00:17:10.855 Test: blockdev nvme passthru vendor specific ...passed 00:17:10.855 Test: blockdev nvme admin passthru ...passed 00:17:10.855 Test: blockdev copy ...passed 00:17:10.855 Suite: bdevio tests on: nvme2n2 00:17:10.855 Test: blockdev write read block ...passed 00:17:10.855 Test: blockdev write zeroes read block ...passed 00:17:10.855 Test: blockdev write zeroes read no split ...passed 00:17:10.855 Test: blockdev write zeroes read split ...passed 00:17:10.855 Test: blockdev write zeroes read split partial ...passed 00:17:10.855 Test: blockdev reset ...passed 00:17:10.855 Test: blockdev write read 8 blocks ...passed 00:17:10.855 Test: blockdev write read size > 128k ...passed 00:17:10.855 Test: blockdev write read invalid size ...passed 00:17:10.855 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:10.855 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:10.855 Test: blockdev write read max offset ...passed 00:17:10.855 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:10.855 Test: blockdev writev readv 8 blocks ...passed 00:17:10.855 Test: blockdev writev readv 30 x 1block ...passed 00:17:10.855 Test: blockdev writev readv block ...passed 00:17:10.855 Test: blockdev writev readv size > 128k ...passed 00:17:10.855 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:10.855 Test: blockdev comparev and writev ...passed 00:17:10.855 Test: blockdev nvme passthru rw ...passed 00:17:10.855 Test: blockdev nvme passthru vendor specific ...passed 00:17:10.855 Test: blockdev nvme admin passthru ...passed 00:17:10.855 Test: blockdev copy ...passed 00:17:10.856 Suite: bdevio tests on: nvme2n1 00:17:10.856 Test: blockdev write read block ...passed 00:17:10.856 Test: blockdev write zeroes read block ...passed 00:17:10.856 Test: blockdev write zeroes read no split ...passed 00:17:10.856 Test: blockdev write zeroes read split ...passed 00:17:10.856 Test: blockdev write zeroes read split partial ...passed 00:17:10.856 Test: blockdev reset ...passed 00:17:10.856 Test: blockdev write read 8 blocks ...passed 00:17:10.856 Test: blockdev write read size > 128k ...passed 00:17:10.856 Test: blockdev write read invalid size ...passed 00:17:10.856 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:10.856 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:10.856 Test: blockdev write read max offset ...passed 00:17:10.856 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:10.856 Test: blockdev writev readv 8 blocks 
...passed 00:17:10.856 Test: blockdev writev readv 30 x 1block ...passed 00:17:10.856 Test: blockdev writev readv block ...passed 00:17:10.856 Test: blockdev writev readv size > 128k ...passed 00:17:10.856 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:10.856 Test: blockdev comparev and writev ...passed 00:17:10.856 Test: blockdev nvme passthru rw ...passed 00:17:10.856 Test: blockdev nvme passthru vendor specific ...passed 00:17:10.856 Test: blockdev nvme admin passthru ...passed 00:17:10.856 Test: blockdev copy ...passed 00:17:10.856 Suite: bdevio tests on: nvme1n1 00:17:10.856 Test: blockdev write read block ...passed 00:17:10.856 Test: blockdev write zeroes read block ...passed 00:17:10.856 Test: blockdev write zeroes read no split ...passed 00:17:10.856 Test: blockdev write zeroes read split ...passed 00:17:11.114 Test: blockdev write zeroes read split partial ...passed 00:17:11.114 Test: blockdev reset ...passed 00:17:11.114 Test: blockdev write read 8 blocks ...passed 00:17:11.114 Test: blockdev write read size > 128k ...passed 00:17:11.114 Test: blockdev write read invalid size ...passed 00:17:11.114 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:11.114 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:11.114 Test: blockdev write read max offset ...passed 00:17:11.114 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:11.114 Test: blockdev writev readv 8 blocks ...passed 00:17:11.114 Test: blockdev writev readv 30 x 1block ...passed 00:17:11.114 Test: blockdev writev readv block ...passed 00:17:11.114 Test: blockdev writev readv size > 128k ...passed 00:17:11.114 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:11.114 Test: blockdev comparev and writev ...passed 00:17:11.114 Test: blockdev nvme passthru rw ...passed 00:17:11.114 Test: blockdev nvme passthru vendor specific ...passed 00:17:11.114 Test: blockdev nvme admin passthru ...passed 00:17:11.114 Test: blockdev copy ...passed 00:17:11.114 Suite: bdevio tests on: nvme0n1 00:17:11.114 Test: blockdev write read block ...passed 00:17:11.114 Test: blockdev write zeroes read block ...passed 00:17:11.114 Test: blockdev write zeroes read no split ...passed 00:17:11.114 Test: blockdev write zeroes read split ...passed 00:17:11.114 Test: blockdev write zeroes read split partial ...passed 00:17:11.114 Test: blockdev reset ...passed 00:17:11.114 Test: blockdev write read 8 blocks ...passed 00:17:11.114 Test: blockdev write read size > 128k ...passed 00:17:11.114 Test: blockdev write read invalid size ...passed 00:17:11.114 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:11.114 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:11.114 Test: blockdev write read max offset ...passed 00:17:11.114 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:11.114 Test: blockdev writev readv 8 blocks ...passed 00:17:11.114 Test: blockdev writev readv 30 x 1block ...passed 00:17:11.114 Test: blockdev writev readv block ...passed 00:17:11.114 Test: blockdev writev readv size > 128k ...passed 00:17:11.114 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:11.114 Test: blockdev comparev and writev ...passed 00:17:11.114 Test: blockdev nvme passthru rw ...passed 00:17:11.114 Test: blockdev nvme passthru vendor specific ...passed 00:17:11.114 Test: blockdev nvme admin passthru ...passed 00:17:11.114 Test: blockdev copy ...passed 
00:17:11.114 00:17:11.114 Run Summary: Type Total Ran Passed Failed Inactive 00:17:11.114 suites 6 6 n/a 0 0 00:17:11.114 tests 138 138 138 0 0 00:17:11.114 asserts 780 780 780 0 n/a 00:17:11.114 00:17:11.114 Elapsed time = 1.428 seconds 00:17:11.114 0 00:17:11.114 19:39:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 77043 00:17:11.114 19:39:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 77043 ']' 00:17:11.114 19:39:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 77043 00:17:11.114 19:39:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:17:11.114 19:39:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:11.114 19:39:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77043 00:17:11.114 19:39:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:11.114 19:39:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:11.114 19:39:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77043' 00:17:11.114 killing process with pid 77043 00:17:11.114 19:39:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 77043 00:17:11.114 19:39:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 77043 00:17:12.488 19:39:03 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:17:12.488 00:17:12.488 real 0m3.215s 00:17:12.488 user 0m7.428s 00:17:12.488 sys 0m0.447s 00:17:12.488 19:39:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:12.488 19:39:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:12.488 ************************************ 00:17:12.488 END TEST bdev_bounds 00:17:12.488 ************************************ 00:17:12.488 19:39:03 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:17:12.488 19:39:03 blockdev_xnvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:17:12.488 19:39:03 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:12.488 19:39:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:12.488 19:39:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:12.488 ************************************ 00:17:12.488 START TEST bdev_nbd 00:17:12.488 ************************************ 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 
00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=77122 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 77122 /var/tmp/spdk-nbd.sock 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 77122 ']' 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:12.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:12.488 19:39:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:12.745 [2024-07-15 19:39:03.359260] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:17:12.745 [2024-07-15 19:39:03.359439] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.002 [2024-07-15 19:39:03.546838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.002 [2024-07-15 19:39:03.783512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.566 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.566 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:17:13.566 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:17:13.566 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:13.566 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:13.566 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:13.566 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:17:13.566 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:13.566 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:13.566 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:13.566 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:13.566 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:13.566 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:13.566 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:13.566 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:17:13.824 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:13.824 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:13.824 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:13.824 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:17:13.824 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:13.824 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:13.824 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:13.824 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:17:13.824 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:13.824 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:13.824 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:13.824 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:13.824 
1+0 records in 00:17:13.824 1+0 records out 00:17:13.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000625282 s, 6.6 MB/s 00:17:13.824 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:13.824 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:13.824 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:13.824 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:13.824 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:13.824 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:13.824 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:13.824 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:17:14.083 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:17:14.083 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:17:14.083 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:17:14.083 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:17:14.083 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:14.083 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:14.083 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:14.083 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:17:14.083 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:14.083 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:14.083 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:14.083 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:14.083 1+0 records in 00:17:14.083 1+0 records out 00:17:14.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683137 s, 6.0 MB/s 00:17:14.083 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.083 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:14.083 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.083 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:14.083 19:39:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:14.083 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:14.083 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:14.083 19:39:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:17:14.340 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:17:14.340 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:17:14.340 19:39:05 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:17:14.340 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:17:14.340 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:14.340 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:14.340 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:14.340 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:17:14.340 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:14.340 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:14.340 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:14.340 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:14.340 1+0 records in 00:17:14.340 1+0 records out 00:17:14.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440221 s, 9.3 MB/s 00:17:14.340 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.340 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:14.341 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.341 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:14.341 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:14.341 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:14.341 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:14.341 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:17:14.598 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:17:14.599 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:17:14.599 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:17:14.599 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:17:14.599 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:14.599 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:14.599 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:14.599 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:17:14.599 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:14.599 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:14.599 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:14.599 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:14.599 1+0 records in 00:17:14.599 1+0 records out 00:17:14.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419875 s, 9.8 MB/s 00:17:14.599 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.599 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:14.599 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.599 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:14.599 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:14.599 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:14.599 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:14.599 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:17:14.857 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:17:14.857 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:17:14.857 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:17:14.857 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:17:14.857 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:14.857 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:14.857 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:14.857 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:17:14.857 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:14.857 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:14.857 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:14.857 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:14.857 1+0 records in 00:17:14.857 1+0 records out 00:17:14.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000792425 s, 5.2 MB/s 00:17:14.857 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.857 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:14.857 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.857 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:14.857 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:14.857 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:14.857 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:14.857 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:17:15.136 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:17:15.136 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:17:15.136 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:17:15.136 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:17:15.136 19:39:05 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:15.136 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:15.136 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:15.136 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:17:15.136 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:15.136 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:15.136 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:15.136 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:15.136 1+0 records in 00:17:15.136 1+0 records out 00:17:15.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000874424 s, 4.7 MB/s 00:17:15.137 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.137 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:15.137 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.137 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:15.137 19:39:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:15.137 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:15.137 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:15.137 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:15.394 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:15.394 { 00:17:15.394 "nbd_device": "/dev/nbd0", 00:17:15.394 "bdev_name": "nvme0n1" 00:17:15.394 }, 00:17:15.394 { 00:17:15.394 "nbd_device": "/dev/nbd1", 00:17:15.394 "bdev_name": "nvme1n1" 00:17:15.394 }, 00:17:15.394 { 00:17:15.394 "nbd_device": "/dev/nbd2", 00:17:15.394 "bdev_name": "nvme2n1" 00:17:15.394 }, 00:17:15.394 { 00:17:15.394 "nbd_device": "/dev/nbd3", 00:17:15.394 "bdev_name": "nvme2n2" 00:17:15.394 }, 00:17:15.394 { 00:17:15.394 "nbd_device": "/dev/nbd4", 00:17:15.394 "bdev_name": "nvme2n3" 00:17:15.394 }, 00:17:15.394 { 00:17:15.394 "nbd_device": "/dev/nbd5", 00:17:15.394 "bdev_name": "nvme3n1" 00:17:15.394 } 00:17:15.394 ]' 00:17:15.394 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:15.394 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:15.394 { 00:17:15.394 "nbd_device": "/dev/nbd0", 00:17:15.394 "bdev_name": "nvme0n1" 00:17:15.394 }, 00:17:15.394 { 00:17:15.394 "nbd_device": "/dev/nbd1", 00:17:15.394 "bdev_name": "nvme1n1" 00:17:15.394 }, 00:17:15.394 { 00:17:15.394 "nbd_device": "/dev/nbd2", 00:17:15.394 "bdev_name": "nvme2n1" 00:17:15.394 }, 00:17:15.394 { 00:17:15.394 "nbd_device": "/dev/nbd3", 00:17:15.394 "bdev_name": "nvme2n2" 00:17:15.394 }, 00:17:15.394 { 00:17:15.394 "nbd_device": "/dev/nbd4", 00:17:15.394 "bdev_name": "nvme2n3" 00:17:15.394 }, 00:17:15.394 { 00:17:15.394 "nbd_device": "/dev/nbd5", 00:17:15.394 "bdev_name": "nvme3n1" 00:17:15.394 } 00:17:15.394 ]' 00:17:15.395 19:39:05 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:15.395 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:17:15.395 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:15.395 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:17:15.395 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:15.395 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:15.395 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:15.395 19:39:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:15.652 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:15.652 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:15.652 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:15.652 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:15.652 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:15.652 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:15.652 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:15.652 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:15.652 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:15.652 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:15.910 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:15.910 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:15.910 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:15.910 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:15.910 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:15.910 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:15.910 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:15.910 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:15.910 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:15.910 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:17:16.167 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:17:16.167 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:17:16.167 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:17:16.167 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:16.167 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:16.167 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:17:16.167 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:16.167 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:16.167 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:16.167 19:39:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:17:16.425 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:17:16.425 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:17:16.425 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:17:16.425 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:16.425 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:16.425 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:17:16.426 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:16.426 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:16.426 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:16.426 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:17:16.761 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:17:16.761 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:17:16.761 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:17:16.761 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:16.761 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:16.761 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:17:16.761 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:16.761 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:16.761 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:16.761 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:17:17.018 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:17:17.018 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:17:17.018 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:17:17.018 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:17.018 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:17.018 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:17:17.018 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:17.018 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:17.018 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:17.018 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:17.018 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:17.018 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:17.019 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:17.019 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:17.276 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:17.276 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:17.276 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:17.276 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:17.276 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:17.276 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:17.276 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:17.276 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:17.276 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:17.276 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:17.276 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:17.276 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:17.276 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:17.277 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:17.277 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:17.277 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:17.277 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:17.277 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:17.277 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:17.277 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:17.277 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:17.277 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:17.277 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:17.277 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:17.277 19:39:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:17:17.534 /dev/nbd0 00:17:17.534 19:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:17.534 19:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:17.534 19:39:08 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:17:17.534 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:17.534 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:17.534 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:17.535 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:17:17.535 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:17.535 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:17.535 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:17.535 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.535 1+0 records in 00:17:17.535 1+0 records out 00:17:17.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358831 s, 11.4 MB/s 00:17:17.535 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.535 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:17.535 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.535 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:17.535 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:17.535 19:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:17.535 19:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:17.535 19:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:17:17.792 /dev/nbd1 00:17:17.792 19:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:17.792 19:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:17.792 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:17:17.792 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:17.792 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:17.792 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:17.792 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:17:17.792 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:17.792 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:17.792 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:17.792 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.792 1+0 records in 00:17:17.792 1+0 records out 00:17:17.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000824234 s, 5.0 MB/s 00:17:17.792 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.792 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:17.792 19:39:08 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.792 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:17.792 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:17.792 19:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:17.792 19:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:17.792 19:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:17:18.049 /dev/nbd10 00:17:18.050 19:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:17:18.050 19:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:17:18.050 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:17:18.050 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:18.050 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:18.050 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:18.050 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:17:18.050 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:18.050 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:18.050 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:18.050 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:18.050 1+0 records in 00:17:18.050 1+0 records out 00:17:18.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000677598 s, 6.0 MB/s 00:17:18.050 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.050 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:18.050 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.050 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:18.050 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:18.050 19:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:18.050 19:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:18.050 19:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:17:18.307 /dev/nbd11 00:17:18.307 19:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:17:18.307 19:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:17:18.307 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:17:18.307 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:18.308 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:18.308 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:18.308 19:39:08 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:17:18.308 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:18.308 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:18.308 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:18.308 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:18.308 1+0 records in 00:17:18.308 1+0 records out 00:17:18.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627978 s, 6.5 MB/s 00:17:18.308 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.308 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:18.308 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.308 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:18.308 19:39:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:18.308 19:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:18.308 19:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:18.308 19:39:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:17:18.565 /dev/nbd12 00:17:18.565 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:17:18.565 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:17:18.565 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:17:18.565 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:18.565 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:18.565 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:18.565 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:17:18.565 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:18.565 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:18.565 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:18.565 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:18.565 1+0 records in 00:17:18.565 1+0 records out 00:17:18.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579024 s, 7.1 MB/s 00:17:18.565 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.565 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:18.565 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.565 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:18.565 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:18.565 19:39:09 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:18.565 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:18.565 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:17:18.824 /dev/nbd13 00:17:18.824 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:17:18.824 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:17:18.824 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:17:18.824 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:18.824 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:18.824 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:18.824 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:17:18.824 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:18.824 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:18.824 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:18.824 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:18.824 1+0 records in 00:17:18.824 1+0 records out 00:17:18.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553739 s, 7.4 MB/s 00:17:18.824 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.824 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:18.824 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.824 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:18.824 19:39:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:18.824 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:18.824 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:18.824 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:18.824 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:18.824 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:19.082 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:19.082 { 00:17:19.082 "nbd_device": "/dev/nbd0", 00:17:19.082 "bdev_name": "nvme0n1" 00:17:19.082 }, 00:17:19.082 { 00:17:19.082 "nbd_device": "/dev/nbd1", 00:17:19.082 "bdev_name": "nvme1n1" 00:17:19.082 }, 00:17:19.082 { 00:17:19.082 "nbd_device": "/dev/nbd10", 00:17:19.082 "bdev_name": "nvme2n1" 00:17:19.082 }, 00:17:19.082 { 00:17:19.083 "nbd_device": "/dev/nbd11", 00:17:19.083 "bdev_name": "nvme2n2" 00:17:19.083 }, 00:17:19.083 { 00:17:19.083 "nbd_device": "/dev/nbd12", 00:17:19.083 "bdev_name": "nvme2n3" 00:17:19.083 }, 00:17:19.083 { 00:17:19.083 "nbd_device": "/dev/nbd13", 00:17:19.083 "bdev_name": "nvme3n1" 00:17:19.083 } 00:17:19.083 ]' 00:17:19.083 19:39:09 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:19.083 { 00:17:19.083 "nbd_device": "/dev/nbd0", 00:17:19.083 "bdev_name": "nvme0n1" 00:17:19.083 }, 00:17:19.083 { 00:17:19.083 "nbd_device": "/dev/nbd1", 00:17:19.083 "bdev_name": "nvme1n1" 00:17:19.083 }, 00:17:19.083 { 00:17:19.083 "nbd_device": "/dev/nbd10", 00:17:19.083 "bdev_name": "nvme2n1" 00:17:19.083 }, 00:17:19.083 { 00:17:19.083 "nbd_device": "/dev/nbd11", 00:17:19.083 "bdev_name": "nvme2n2" 00:17:19.083 }, 00:17:19.083 { 00:17:19.083 "nbd_device": "/dev/nbd12", 00:17:19.083 "bdev_name": "nvme2n3" 00:17:19.083 }, 00:17:19.083 { 00:17:19.083 "nbd_device": "/dev/nbd13", 00:17:19.083 "bdev_name": "nvme3n1" 00:17:19.083 } 00:17:19.083 ]' 00:17:19.083 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:19.083 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:19.083 /dev/nbd1 00:17:19.083 /dev/nbd10 00:17:19.083 /dev/nbd11 00:17:19.083 /dev/nbd12 00:17:19.083 /dev/nbd13' 00:17:19.083 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:19.083 /dev/nbd1 00:17:19.083 /dev/nbd10 00:17:19.083 /dev/nbd11 00:17:19.083 /dev/nbd12 00:17:19.083 /dev/nbd13' 00:17:19.083 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:19.083 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:17:19.083 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:17:19.083 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:17:19.083 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:17:19.083 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:17:19.083 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:19.083 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:19.083 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:19.083 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:19.083 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:19.083 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:19.083 256+0 records in 00:17:19.083 256+0 records out 00:17:19.083 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0081353 s, 129 MB/s 00:17:19.083 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:19.083 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:19.083 256+0 records in 00:17:19.083 256+0 records out 00:17:19.083 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123209 s, 8.5 MB/s 00:17:19.083 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:19.083 19:39:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:19.342 256+0 records in 00:17:19.342 256+0 records out 00:17:19.342 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.144772 s, 7.2 MB/s 00:17:19.342 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:19.342 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:17:19.600 256+0 records in 00:17:19.600 256+0 records out 00:17:19.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134116 s, 7.8 MB/s 00:17:19.600 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:19.600 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:17:19.600 256+0 records in 00:17:19.600 256+0 records out 00:17:19.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121837 s, 8.6 MB/s 00:17:19.600 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:19.600 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:17:19.859 256+0 records in 00:17:19.859 256+0 records out 00:17:19.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12932 s, 8.1 MB/s 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:17:19.859 256+0 records in 00:17:19.859 256+0 records out 00:17:19.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127746 s, 8.2 MB/s 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:19.859 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:20.118 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:20.118 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:20.118 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:20.118 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:20.118 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:20.118 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:20.118 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:20.118 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:20.118 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:20.118 19:39:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:20.684 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:20.685 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:20.685 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:20.685 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:20.685 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:20.685 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:20.685 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:20.685 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:20.685 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:20.685 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:17:20.685 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:17:20.685 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:17:20.685 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:17:20.685 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:20.685 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:20.685 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:17:20.685 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:20.685 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:20.685 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:20.685 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:17:21.250 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:17:21.250 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:17:21.250 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:17:21.250 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:21.250 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:21.250 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:17:21.250 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:21.250 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:21.250 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:21.250 19:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:17:21.250 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:17:21.250 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:17:21.250 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:17:21.250 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:21.250 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:21.250 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:17:21.250 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:21.250 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:21.250 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:21.250 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:17:21.508 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:17:21.508 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:17:21.508 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:17:21.508 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:21.508 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:17:21.508 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:17:21.508 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:21.508 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:21.508 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:21.508 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:21.508 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:21.767 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:21.767 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:21.767 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:22.026 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:22.026 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:22.026 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:22.026 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:22.026 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:22.026 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:22.026 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:22.026 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:22.026 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:22.026 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:22.026 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:22.026 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:22.026 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:17:22.026 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:17:22.026 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:22.285 malloc_lvol_verify 00:17:22.285 19:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:22.285 afa91f55-4db9-486f-869a-1be0f6ab8074 00:17:22.285 19:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:22.550 4e3dffe5-a805-4524-8a11-4cebedc2fc86 00:17:22.550 19:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:22.818 /dev/nbd0 00:17:22.818 19:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:17:22.818 mke2fs 1.46.5 (30-Dec-2021) 00:17:22.818 Discarding device blocks: 0/4096 done 00:17:22.818 Creating filesystem with 4096 1k blocks and 
1024 inodes 00:17:22.818 00:17:22.818 Allocating group tables: 0/1 done 00:17:22.818 Writing inode tables: 0/1 done 00:17:22.818 Creating journal (1024 blocks): done 00:17:22.818 Writing superblocks and filesystem accounting information: 0/1 done 00:17:22.818 00:17:22.818 19:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:17:22.818 19:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:22.818 19:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:22.818 19:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:22.818 19:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:22.818 19:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:22.819 19:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:22.819 19:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:23.078 19:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:23.078 19:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:23.078 19:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:23.078 19:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:23.078 19:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:23.078 19:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:23.078 19:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:23.078 19:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:23.078 19:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:17:23.078 19:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:17:23.078 19:39:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 77122 00:17:23.078 19:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 77122 ']' 00:17:23.078 19:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 77122 00:17:23.078 19:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:17:23.078 19:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:23.078 19:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77122 00:17:23.078 19:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:23.078 killing process with pid 77122 00:17:23.078 19:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:23.078 19:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77122' 00:17:23.078 19:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 77122 00:17:23.078 19:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 77122 00:17:24.982 19:39:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:17:24.982 00:17:24.982 real 0m12.099s 00:17:24.982 user 0m15.978s 00:17:24.982 sys 0m4.758s 00:17:24.982 19:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:24.982 19:39:15 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:24.982 ************************************ 00:17:24.982 END TEST bdev_nbd 00:17:24.982 ************************************ 00:17:24.982 19:39:15 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:17:24.982 19:39:15 blockdev_xnvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:17:24.982 19:39:15 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = nvme ']' 00:17:24.982 19:39:15 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = gpt ']' 00:17:24.982 19:39:15 blockdev_xnvme -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:17:24.982 19:39:15 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:24.982 19:39:15 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:24.982 19:39:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:24.982 ************************************ 00:17:24.982 START TEST bdev_fio 00:17:24.982 ************************************ 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:24.982 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 
00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n1]' 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n1 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme1n1]' 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme1n1 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n1]' 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n1 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n2]' 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n2 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n3]' 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n3 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme3n1]' 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme3n1 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:24.982 ************************************ 00:17:24.982 START TEST bdev_fio_rw_verify 00:17:24.982 ************************************ 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- 
# fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:24.982 19:39:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:24.982 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:24.982 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:24.982 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:24.982 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:24.982 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:24.982 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:24.982 fio-3.35 00:17:24.982 Starting 6 threads 00:17:37.266 00:17:37.267 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=77541: Mon Jul 15 19:39:26 2024 00:17:37.267 read: 
IOPS=29.6k, BW=116MiB/s (121MB/s)(1156MiB/10001msec) 00:17:37.267 slat (usec): min=2, max=988, avg= 6.73, stdev= 5.55 00:17:37.267 clat (usec): min=105, max=8809, avg=641.41, stdev=257.87 00:17:37.267 lat (usec): min=112, max=8815, avg=648.15, stdev=258.67 00:17:37.267 clat percentiles (usec): 00:17:37.267 | 50.000th=[ 644], 99.000th=[ 1303], 99.900th=[ 2114], 99.990th=[ 4621], 00:17:37.267 | 99.999th=[ 7046] 00:17:37.267 write: IOPS=29.9k, BW=117MiB/s (123MB/s)(1169MiB/10001msec); 0 zone resets 00:17:37.267 slat (usec): min=11, max=4058, avg=24.55, stdev=28.54 00:17:37.267 clat (usec): min=81, max=30894, avg=715.31, stdev=443.45 00:17:37.267 lat (usec): min=99, max=30930, avg=739.87, stdev=445.30 00:17:37.267 clat percentiles (usec): 00:17:37.267 | 50.000th=[ 701], 99.000th=[ 1418], 99.900th=[ 2507], 99.990th=[23987], 00:17:37.267 | 99.999th=[30802] 00:17:37.267 bw ( KiB/s): min=97143, max=149974, per=100.00%, avg=120796.37, stdev=2760.32, samples=114 00:17:37.267 iops : min=24285, max=37493, avg=30198.84, stdev=690.06, samples=114 00:17:37.267 lat (usec) : 100=0.01%, 250=3.58%, 500=20.70%, 750=39.70%, 1000=27.26% 00:17:37.267 lat (msec) : 2=8.59%, 4=0.13%, 10=0.03%, 50=0.01% 00:17:37.267 cpu : usr=56.44%, sys=29.95%, ctx=7286, majf=0, minf=25109 00:17:37.267 IO depths : 1=12.1%, 2=24.7%, 4=50.3%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:37.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.267 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.267 issued rwts: total=295889,299280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.267 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:37.267 00:17:37.267 Run status group 0 (all jobs): 00:17:37.267 READ: bw=116MiB/s (121MB/s), 116MiB/s-116MiB/s (121MB/s-121MB/s), io=1156MiB (1212MB), run=10001-10001msec 00:17:37.267 WRITE: bw=117MiB/s (123MB/s), 117MiB/s-117MiB/s (123MB/s-123MB/s), io=1169MiB (1226MB), run=10001-10001msec 00:17:37.526 ----------------------------------------------------- 00:17:37.526 Suppressions used: 00:17:37.526 count bytes template 00:17:37.526 6 48 /usr/src/fio/parse.c 00:17:37.526 3180 305280 /usr/src/fio/iolog.c 00:17:37.526 1 8 libtcmalloc_minimal.so 00:17:37.526 1 904 libcrypto.so 00:17:37.526 ----------------------------------------------------- 00:17:37.526 00:17:37.526 00:17:37.526 real 0m12.806s 00:17:37.526 user 0m36.179s 00:17:37.526 sys 0m18.352s 00:17:37.526 19:39:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:37.526 19:39:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:37.526 ************************************ 00:17:37.526 END TEST bdev_fio_rw_verify 00:17:37.526 ************************************ 00:17:37.526 19:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:17:37.526 19:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:17:37.526 19:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:37.526 19:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:37.526 19:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:37.526 19:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:17:37.786 19:39:28 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:17:37.786 19:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:37.786 19:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:37.786 19:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:37.786 19:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:17:37.786 19:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:37.786 19:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:37.786 19:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:37.786 19:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:17:37.786 19:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:17:37.786 19:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:17:37.786 19:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:37.786 19:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "2599b578-b650-4a0d-a091-a0d0647318f5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2599b578-b650-4a0d-a091-a0d0647318f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "180d98c3-ac42-4bf8-bbec-abe1f8a24d04"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "180d98c3-ac42-4bf8-bbec-abe1f8a24d04",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "c49e6a02-960c-40d2-9564-50765a021fd2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c49e6a02-960c-40d2-9564-50765a021fd2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "46554e1b-edbf-453c-8ea6-0b2b8f1b31e7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "46554e1b-edbf-453c-8ea6-0b2b8f1b31e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "1b001754-8ab1-41cd-b1b1-a59c628ddec0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1b001754-8ab1-41cd-b1b1-a59c628ddec0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "3136509c-5a7e-424f-8591-850247288d58"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3136509c-5a7e-424f-8591-850247288d58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:37.786 19:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:17:37.786 19:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:37.786 /home/vagrant/spdk_repo/spdk 
00:17:37.786 19:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:17:37.786 19:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:17:37.786 19:39:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:17:37.786 00:17:37.786 real 0m13.003s 00:17:37.786 user 0m36.268s 00:17:37.786 sys 0m18.462s 00:17:37.786 19:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:37.786 19:39:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:37.786 ************************************ 00:17:37.786 END TEST bdev_fio 00:17:37.786 ************************************ 00:17:37.786 19:39:28 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:17:37.786 19:39:28 blockdev_xnvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:37.786 19:39:28 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:37.786 19:39:28 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:17:37.786 19:39:28 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:37.786 19:39:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:37.786 ************************************ 00:17:37.786 START TEST bdev_verify 00:17:37.786 ************************************ 00:17:37.786 19:39:28 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:37.786 [2024-07-15 19:39:28.520358] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:17:37.786 [2024-07-15 19:39:28.520496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77711 ] 00:17:38.045 [2024-07-15 19:39:28.688932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:38.304 [2024-07-15 19:39:28.964483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.304 [2024-07-15 19:39:28.964506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.871 Running I/O for 5 seconds... 
00:17:44.140 00:17:44.140 Latency(us) 00:17:44.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.140 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:44.140 Verification LBA range: start 0x0 length 0xa0000 00:17:44.140 nvme0n1 : 5.02 1937.52 7.57 0.00 0.00 65955.33 10173.68 72401.68 00:17:44.140 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:44.140 Verification LBA range: start 0xa0000 length 0xa0000 00:17:44.140 nvme0n1 : 5.02 1708.70 6.67 0.00 0.00 74782.23 9487.12 76895.57 00:17:44.140 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:44.140 Verification LBA range: start 0x0 length 0xbd0bd 00:17:44.140 nvme1n1 : 5.05 3246.20 12.68 0.00 0.00 39205.09 4587.52 75896.93 00:17:44.140 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:44.140 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:17:44.140 nvme1n1 : 5.06 3034.86 11.85 0.00 0.00 41968.41 4462.69 80890.15 00:17:44.140 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:44.140 Verification LBA range: start 0x0 length 0x80000 00:17:44.140 nvme2n1 : 5.05 1951.60 7.62 0.00 0.00 65312.98 6397.56 65411.17 00:17:44.140 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:44.140 Verification LBA range: start 0x80000 length 0x80000 00:17:44.140 nvme2n1 : 5.06 1718.74 6.71 0.00 0.00 73917.55 6366.35 74898.29 00:17:44.140 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:44.140 Verification LBA range: start 0x0 length 0x80000 00:17:44.140 nvme2n2 : 5.05 1950.26 7.62 0.00 0.00 65213.21 4993.22 66909.14 00:17:44.140 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:44.140 Verification LBA range: start 0x80000 length 0x80000 00:17:44.140 nvme2n2 : 5.07 1718.11 6.71 0.00 0.00 73813.47 7084.13 65411.17 00:17:44.140 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:44.140 Verification LBA range: start 0x0 length 0x80000 00:17:44.140 nvme2n3 : 5.05 1951.05 7.62 0.00 0.00 65057.64 5648.58 73400.32 00:17:44.141 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:44.141 Verification LBA range: start 0x80000 length 0x80000 00:17:44.141 nvme2n3 : 5.07 1717.48 6.71 0.00 0.00 73737.29 7926.74 69905.07 00:17:44.141 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:44.141 Verification LBA range: start 0x0 length 0x20000 00:17:44.141 nvme3n1 : 5.05 1949.80 7.62 0.00 0.00 64993.30 4493.90 87381.33 00:17:44.141 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:44.141 Verification LBA range: start 0x20000 length 0x20000 00:17:44.141 nvme3n1 : 5.07 1716.85 6.71 0.00 0.00 73716.78 7458.62 82887.44 00:17:44.141 =================================================================================================================== 00:17:44.141 Total : 24601.18 96.10 0.00 0.00 62012.87 4462.69 87381.33 00:17:45.515 00:17:45.515 real 0m7.575s 00:17:45.515 user 0m11.623s 00:17:45.515 sys 0m2.031s 00:17:45.515 19:39:36 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:45.515 19:39:36 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:45.515 ************************************ 00:17:45.515 END TEST bdev_verify 00:17:45.515 ************************************ 00:17:45.515 19:39:36 blockdev_xnvme -- 
common/autotest_common.sh@1142 -- # return 0 00:17:45.515 19:39:36 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:45.515 19:39:36 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:17:45.515 19:39:36 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:45.515 19:39:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:45.515 ************************************ 00:17:45.515 START TEST bdev_verify_big_io 00:17:45.515 ************************************ 00:17:45.515 19:39:36 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:45.515 [2024-07-15 19:39:36.185441] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:17:45.515 [2024-07-15 19:39:36.185621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77821 ] 00:17:45.775 [2024-07-15 19:39:36.372080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:46.034 [2024-07-15 19:39:36.624032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.034 [2024-07-15 19:39:36.624069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.601 Running I/O for 5 seconds... 00:17:53.193 00:17:53.193 Latency(us) 00:17:53.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.193 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:53.193 Verification LBA range: start 0x0 length 0xa000 00:17:53.193 nvme0n1 : 5.73 89.38 5.59 0.00 0.00 1397910.19 230686.72 1350166.43 00:17:53.193 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:53.193 Verification LBA range: start 0xa000 length 0xa000 00:17:53.193 nvme0n1 : 6.28 81.52 5.10 0.00 0.00 1529314.74 153791.15 1829515.46 00:17:53.193 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:53.193 Verification LBA range: start 0x0 length 0xbd0b 00:17:53.193 nvme1n1 : 5.99 136.20 8.51 0.00 0.00 878088.78 67408.46 1110491.92 00:17:53.193 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:53.193 Verification LBA range: start 0xbd0b length 0xbd0b 00:17:53.193 nvme1n1 : 6.25 102.41 6.40 0.00 0.00 1149667.38 10236.10 1741634.80 00:17:53.193 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:53.193 Verification LBA range: start 0x0 length 0x8000 00:17:53.193 nvme2n1 : 6.25 48.62 3.04 0.00 0.00 2378937.09 351522.62 3499247.91 00:17:53.193 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:53.193 Verification LBA range: start 0x8000 length 0x8000 00:17:53.193 nvme2n1 : 6.13 125.20 7.83 0.00 0.00 904464.42 95869.81 1198372.57 00:17:53.193 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:53.193 Verification LBA range: start 0x0 length 0x8000 00:17:53.193 nvme2n2 : 6.00 106.75 6.67 0.00 0.00 1043528.56 109351.50 1038589.56 00:17:53.193 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO 
size: 65536) 00:17:53.193 Verification LBA range: start 0x8000 length 0x8000 00:17:53.193 nvme2n2 : 6.28 78.94 4.93 0.00 0.00 1374433.88 125329.80 2492614.95 00:17:53.193 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:53.193 Verification LBA range: start 0x0 length 0x8000 00:17:53.193 nvme2n3 : 6.24 123.08 7.69 0.00 0.00 864288.67 9112.62 1190383.42 00:17:53.193 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:53.193 Verification LBA range: start 0x8000 length 0x8000 00:17:53.193 nvme2n3 : 6.26 72.10 4.51 0.00 0.00 1443616.57 111848.11 2716311.16 00:17:53.193 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:53.193 Verification LBA range: start 0x0 length 0x2000 00:17:53.193 nvme3n1 : 6.25 87.04 5.44 0.00 0.00 1181318.45 3573.27 3898705.43 00:17:53.193 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:53.193 Verification LBA range: start 0x2000 length 0x2000 00:17:53.193 nvme3n1 : 6.29 99.14 6.20 0.00 0.00 1009998.83 5055.63 3259573.39 00:17:53.193 =================================================================================================================== 00:17:53.194 Total : 1150.38 71.90 0.00 0.00 1174017.11 3573.27 3898705.43 00:17:54.567 00:17:54.567 real 0m9.275s 00:17:54.567 user 0m16.680s 00:17:54.567 sys 0m0.532s 00:17:54.567 19:39:45 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:54.567 19:39:45 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.567 ************************************ 00:17:54.567 END TEST bdev_verify_big_io 00:17:54.567 ************************************ 00:17:54.825 19:39:45 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:17:54.825 19:39:45 blockdev_xnvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:54.825 19:39:45 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:17:54.825 19:39:45 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:54.825 19:39:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:54.825 ************************************ 00:17:54.825 START TEST bdev_write_zeroes 00:17:54.825 ************************************ 00:17:54.825 19:39:45 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:54.825 [2024-07-15 19:39:45.524071] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:17:54.825 [2024-07-15 19:39:45.524363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77943 ] 00:17:55.083 [2024-07-15 19:39:45.720482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.340 [2024-07-15 19:39:45.973246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.905 Running I/O for 1 seconds... 
00:17:56.839 00:17:56.839 Latency(us) 00:17:56.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.839 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:56.839 nvme0n1 : 1.01 14187.01 55.42 0.00 0.00 9013.77 7302.58 12795.12 00:17:56.839 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:56.839 nvme1n1 : 1.01 18030.91 70.43 0.00 0.00 7075.83 3666.90 9175.04 00:17:56.839 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:56.839 nvme2n1 : 1.01 14156.20 55.30 0.00 0.00 8986.11 7208.96 12295.80 00:17:56.839 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:56.839 nvme2n2 : 1.01 14141.04 55.24 0.00 0.00 8990.11 7489.83 12295.80 00:17:56.839 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:56.839 nvme2n3 : 1.01 14125.86 55.18 0.00 0.00 8992.29 7552.24 12358.22 00:17:56.839 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:56.839 nvme3n1 : 1.02 14110.13 55.12 0.00 0.00 8995.65 7427.41 12420.63 00:17:56.839 =================================================================================================================== 00:17:56.839 Total : 88751.15 346.68 0.00 0.00 8606.17 3666.90 12795.12 00:17:58.213 00:17:58.213 real 0m3.574s 00:17:58.213 user 0m2.746s 00:17:58.213 sys 0m0.657s 00:17:58.213 19:39:48 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:58.213 19:39:48 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:58.213 ************************************ 00:17:58.213 END TEST bdev_write_zeroes 00:17:58.213 ************************************ 00:17:58.471 19:39:49 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:17:58.471 19:39:49 blockdev_xnvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:58.471 19:39:49 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:17:58.471 19:39:49 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:58.471 19:39:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:58.471 ************************************ 00:17:58.471 START TEST bdev_json_nonenclosed 00:17:58.471 ************************************ 00:17:58.471 19:39:49 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:58.471 [2024-07-15 19:39:49.121668] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:17:58.471 [2024-07-15 19:39:49.121836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78004 ] 00:17:58.730 [2024-07-15 19:39:49.285629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.988 [2024-07-15 19:39:49.632204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.988 [2024-07-15 19:39:49.632338] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:58.988 [2024-07-15 19:39:49.632381] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:58.988 [2024-07-15 19:39:49.632415] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:59.555 00:17:59.555 real 0m1.135s 00:17:59.555 user 0m0.874s 00:17:59.555 sys 0m0.153s 00:17:59.555 19:39:50 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:17:59.555 19:39:50 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:59.555 19:39:50 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:59.555 ************************************ 00:17:59.555 END TEST bdev_json_nonenclosed 00:17:59.555 ************************************ 00:17:59.555 19:39:50 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:17:59.555 19:39:50 blockdev_xnvme -- bdev/blockdev.sh@782 -- # true 00:17:59.555 19:39:50 blockdev_xnvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:59.555 19:39:50 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:17:59.555 19:39:50 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:59.555 19:39:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:59.555 ************************************ 00:17:59.555 START TEST bdev_json_nonarray 00:17:59.555 ************************************ 00:17:59.555 19:39:50 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:59.555 [2024-07-15 19:39:50.337159] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:17:59.555 [2024-07-15 19:39:50.337328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78035 ] 00:17:59.816 [2024-07-15 19:39:50.530903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.077 [2024-07-15 19:39:50.779780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.077 [2024-07-15 19:39:50.779909] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
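For context on the two JSON negative tests above (bdev_json_nonenclosed and this bdev_json_nonarray run): bdevperf is pointed at deliberately malformed configs and the suite only checks that the app stops with the expected error status (es=234). The contents of nonenclosed.json and nonarray.json are not reproduced in this log; hand-written stand-ins that would provoke the same two error messages look roughly like this:

    # "not enclosed in {}": top-level value is an array instead of an object
    [ { "subsystem": "bdev", "config": [] } ]

    # "'subsystems' should be an array": the key holds an object, not an array
    { "subsystems": { "subsystem": "bdev", "config": [] } }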
00:18:00.077 [2024-07-15 19:39:50.779942] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:00.077 [2024-07-15 19:39:50.779965] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:00.643 00:18:00.643 real 0m1.043s 00:18:00.643 user 0m0.753s 00:18:00.643 sys 0m0.183s 00:18:00.643 19:39:51 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:18:00.643 19:39:51 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:00.643 19:39:51 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:00.643 ************************************ 00:18:00.643 END TEST bdev_json_nonarray 00:18:00.643 ************************************ 00:18:00.643 19:39:51 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:18:00.643 19:39:51 blockdev_xnvme -- bdev/blockdev.sh@785 -- # true 00:18:00.643 19:39:51 blockdev_xnvme -- bdev/blockdev.sh@787 -- # [[ xnvme == bdev ]] 00:18:00.643 19:39:51 blockdev_xnvme -- bdev/blockdev.sh@794 -- # [[ xnvme == gpt ]] 00:18:00.643 19:39:51 blockdev_xnvme -- bdev/blockdev.sh@798 -- # [[ xnvme == crypto_sw ]] 00:18:00.643 19:39:51 blockdev_xnvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:18:00.643 19:39:51 blockdev_xnvme -- bdev/blockdev.sh@811 -- # cleanup 00:18:00.643 19:39:51 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:00.643 19:39:51 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:00.643 19:39:51 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:18:00.643 19:39:51 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:18:00.643 19:39:51 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:18:00.643 19:39:51 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:18:00.643 19:39:51 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:01.209 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:03.115 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:03.115 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:03.115 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:03.115 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:03.115 00:18:03.115 real 1m7.360s 00:18:03.115 user 1m46.302s 00:18:03.115 sys 0m32.305s 00:18:03.115 19:39:53 blockdev_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:03.115 ************************************ 00:18:03.115 END TEST blockdev_xnvme 00:18:03.115 ************************************ 00:18:03.115 19:39:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:03.374 19:39:53 -- common/autotest_common.sh@1142 -- # return 0 00:18:03.374 19:39:53 -- spdk/autotest.sh@251 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:03.374 19:39:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:03.374 19:39:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:03.374 19:39:53 -- common/autotest_common.sh@10 -- # set +x 00:18:03.374 ************************************ 00:18:03.374 START TEST ublk 00:18:03.374 ************************************ 00:18:03.374 19:39:53 ublk -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:03.374 * Looking for test storage... 
00:18:03.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:03.374 19:39:54 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:03.374 19:39:54 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:03.374 19:39:54 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:03.374 19:39:54 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:03.374 19:39:54 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:03.374 19:39:54 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:03.374 19:39:54 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:03.374 19:39:54 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:03.374 19:39:54 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:18:03.374 19:39:54 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:18:03.374 19:39:54 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:18:03.374 19:39:54 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:18:03.374 19:39:54 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:18:03.374 19:39:54 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:18:03.374 19:39:54 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:18:03.374 19:39:54 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:18:03.374 19:39:54 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:18:03.374 19:39:54 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:18:03.374 19:39:54 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:18:03.374 19:39:54 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:18:03.374 19:39:54 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:03.374 19:39:54 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:03.374 19:39:54 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:03.374 ************************************ 00:18:03.374 START TEST test_save_ublk_config 00:18:03.374 ************************************ 00:18:03.374 19:39:54 ublk.test_save_ublk_config -- common/autotest_common.sh@1123 -- # test_save_config 00:18:03.374 19:39:54 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:18:03.374 19:39:54 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:18:03.374 19:39:54 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=78328 00:18:03.374 19:39:54 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:18:03.374 19:39:54 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 78328 00:18:03.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.374 19:39:54 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 78328 ']' 00:18:03.374 19:39:54 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.374 19:39:54 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.374 19:39:54 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.374 19:39:54 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.374 19:39:54 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:03.633 [2024-07-15 19:39:54.235433] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:18:03.633 [2024-07-15 19:39:54.235612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78328 ] 00:18:03.891 [2024-07-15 19:39:54.423959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.150 [2024-07-15 19:39:54.741333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.086 19:39:55 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:05.086 19:39:55 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:18:05.086 19:39:55 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:18:05.086 19:39:55 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:18:05.086 19:39:55 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.086 19:39:55 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:05.086 [2024-07-15 19:39:55.718805] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:05.086 [2024-07-15 19:39:55.720288] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:05.086 malloc0 00:18:05.086 [2024-07-15 19:39:55.825963] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:05.086 [2024-07-15 19:39:55.826080] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:05.086 [2024-07-15 19:39:55.826095] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:05.086 [2024-07-15 19:39:55.826108] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:05.086 [2024-07-15 19:39:55.832804] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:05.086 [2024-07-15 19:39:55.832840] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:05.086 [2024-07-15 19:39:55.843823] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:05.087 [2024-07-15 19:39:55.843945] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:05.087 [2024-07-15 19:39:55.860797] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:05.087 0 00:18:05.087 19:39:55 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.087 19:39:55 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:18:05.087 19:39:55 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.087 19:39:55 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:05.654 19:39:56 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.654 19:39:56 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:18:05.654 "subsystems": [ 00:18:05.654 { 00:18:05.654 "subsystem": "keyring", 00:18:05.654 "config": [] 00:18:05.654 }, 00:18:05.654 { 00:18:05.654 "subsystem": "iobuf", 00:18:05.654 "config": [ 00:18:05.654 { 00:18:05.654 "method": "iobuf_set_options", 00:18:05.654 "params": { 00:18:05.654 "small_pool_count": 8192, 00:18:05.654 "large_pool_count": 1024, 00:18:05.654 "small_bufsize": 8192, 00:18:05.654 "large_bufsize": 135168 00:18:05.654 } 00:18:05.654 } 00:18:05.654 ] 00:18:05.654 }, 00:18:05.654 { 
00:18:05.654 "subsystem": "sock", 00:18:05.654 "config": [ 00:18:05.654 { 00:18:05.654 "method": "sock_set_default_impl", 00:18:05.654 "params": { 00:18:05.654 "impl_name": "posix" 00:18:05.654 } 00:18:05.654 }, 00:18:05.654 { 00:18:05.654 "method": "sock_impl_set_options", 00:18:05.654 "params": { 00:18:05.654 "impl_name": "ssl", 00:18:05.654 "recv_buf_size": 4096, 00:18:05.654 "send_buf_size": 4096, 00:18:05.654 "enable_recv_pipe": true, 00:18:05.654 "enable_quickack": false, 00:18:05.654 "enable_placement_id": 0, 00:18:05.654 "enable_zerocopy_send_server": true, 00:18:05.654 "enable_zerocopy_send_client": false, 00:18:05.654 "zerocopy_threshold": 0, 00:18:05.654 "tls_version": 0, 00:18:05.654 "enable_ktls": false 00:18:05.654 } 00:18:05.654 }, 00:18:05.654 { 00:18:05.654 "method": "sock_impl_set_options", 00:18:05.654 "params": { 00:18:05.654 "impl_name": "posix", 00:18:05.654 "recv_buf_size": 2097152, 00:18:05.654 "send_buf_size": 2097152, 00:18:05.654 "enable_recv_pipe": true, 00:18:05.654 "enable_quickack": false, 00:18:05.654 "enable_placement_id": 0, 00:18:05.654 "enable_zerocopy_send_server": true, 00:18:05.654 "enable_zerocopy_send_client": false, 00:18:05.654 "zerocopy_threshold": 0, 00:18:05.654 "tls_version": 0, 00:18:05.654 "enable_ktls": false 00:18:05.654 } 00:18:05.654 } 00:18:05.654 ] 00:18:05.654 }, 00:18:05.654 { 00:18:05.654 "subsystem": "vmd", 00:18:05.654 "config": [] 00:18:05.654 }, 00:18:05.654 { 00:18:05.654 "subsystem": "accel", 00:18:05.654 "config": [ 00:18:05.654 { 00:18:05.654 "method": "accel_set_options", 00:18:05.654 "params": { 00:18:05.654 "small_cache_size": 128, 00:18:05.654 "large_cache_size": 16, 00:18:05.654 "task_count": 2048, 00:18:05.654 "sequence_count": 2048, 00:18:05.654 "buf_count": 2048 00:18:05.654 } 00:18:05.654 } 00:18:05.654 ] 00:18:05.654 }, 00:18:05.654 { 00:18:05.654 "subsystem": "bdev", 00:18:05.654 "config": [ 00:18:05.654 { 00:18:05.654 "method": "bdev_set_options", 00:18:05.654 "params": { 00:18:05.654 "bdev_io_pool_size": 65535, 00:18:05.654 "bdev_io_cache_size": 256, 00:18:05.654 "bdev_auto_examine": true, 00:18:05.654 "iobuf_small_cache_size": 128, 00:18:05.654 "iobuf_large_cache_size": 16 00:18:05.654 } 00:18:05.654 }, 00:18:05.654 { 00:18:05.654 "method": "bdev_raid_set_options", 00:18:05.654 "params": { 00:18:05.654 "process_window_size_kb": 1024 00:18:05.654 } 00:18:05.654 }, 00:18:05.654 { 00:18:05.654 "method": "bdev_iscsi_set_options", 00:18:05.654 "params": { 00:18:05.654 "timeout_sec": 30 00:18:05.654 } 00:18:05.654 }, 00:18:05.654 { 00:18:05.654 "method": "bdev_nvme_set_options", 00:18:05.654 "params": { 00:18:05.654 "action_on_timeout": "none", 00:18:05.654 "timeout_us": 0, 00:18:05.654 "timeout_admin_us": 0, 00:18:05.654 "keep_alive_timeout_ms": 10000, 00:18:05.654 "arbitration_burst": 0, 00:18:05.654 "low_priority_weight": 0, 00:18:05.654 "medium_priority_weight": 0, 00:18:05.654 "high_priority_weight": 0, 00:18:05.654 "nvme_adminq_poll_period_us": 10000, 00:18:05.654 "nvme_ioq_poll_period_us": 0, 00:18:05.654 "io_queue_requests": 0, 00:18:05.654 "delay_cmd_submit": true, 00:18:05.654 "transport_retry_count": 4, 00:18:05.654 "bdev_retry_count": 3, 00:18:05.654 "transport_ack_timeout": 0, 00:18:05.654 "ctrlr_loss_timeout_sec": 0, 00:18:05.654 "reconnect_delay_sec": 0, 00:18:05.654 "fast_io_fail_timeout_sec": 0, 00:18:05.654 "disable_auto_failback": false, 00:18:05.654 "generate_uuids": false, 00:18:05.654 "transport_tos": 0, 00:18:05.654 "nvme_error_stat": false, 00:18:05.654 "rdma_srq_size": 0, 00:18:05.654 
"io_path_stat": false, 00:18:05.654 "allow_accel_sequence": false, 00:18:05.654 "rdma_max_cq_size": 0, 00:18:05.654 "rdma_cm_event_timeout_ms": 0, 00:18:05.654 "dhchap_digests": [ 00:18:05.654 "sha256", 00:18:05.654 "sha384", 00:18:05.654 "sha512" 00:18:05.654 ], 00:18:05.654 "dhchap_dhgroups": [ 00:18:05.654 "null", 00:18:05.654 "ffdhe2048", 00:18:05.654 "ffdhe3072", 00:18:05.654 "ffdhe4096", 00:18:05.654 "ffdhe6144", 00:18:05.654 "ffdhe8192" 00:18:05.654 ] 00:18:05.655 } 00:18:05.655 }, 00:18:05.655 { 00:18:05.655 "method": "bdev_nvme_set_hotplug", 00:18:05.655 "params": { 00:18:05.655 "period_us": 100000, 00:18:05.655 "enable": false 00:18:05.655 } 00:18:05.655 }, 00:18:05.655 { 00:18:05.655 "method": "bdev_malloc_create", 00:18:05.655 "params": { 00:18:05.655 "name": "malloc0", 00:18:05.655 "num_blocks": 8192, 00:18:05.655 "block_size": 4096, 00:18:05.655 "physical_block_size": 4096, 00:18:05.655 "uuid": "307c0a0a-bed7-4133-8337-929b8d059910", 00:18:05.655 "optimal_io_boundary": 0 00:18:05.655 } 00:18:05.655 }, 00:18:05.655 { 00:18:05.655 "method": "bdev_wait_for_examine" 00:18:05.655 } 00:18:05.655 ] 00:18:05.655 }, 00:18:05.655 { 00:18:05.655 "subsystem": "scsi", 00:18:05.655 "config": null 00:18:05.655 }, 00:18:05.655 { 00:18:05.655 "subsystem": "scheduler", 00:18:05.655 "config": [ 00:18:05.655 { 00:18:05.655 "method": "framework_set_scheduler", 00:18:05.655 "params": { 00:18:05.655 "name": "static" 00:18:05.655 } 00:18:05.655 } 00:18:05.655 ] 00:18:05.655 }, 00:18:05.655 { 00:18:05.655 "subsystem": "vhost_scsi", 00:18:05.655 "config": [] 00:18:05.655 }, 00:18:05.655 { 00:18:05.655 "subsystem": "vhost_blk", 00:18:05.655 "config": [] 00:18:05.655 }, 00:18:05.655 { 00:18:05.655 "subsystem": "ublk", 00:18:05.655 "config": [ 00:18:05.655 { 00:18:05.655 "method": "ublk_create_target", 00:18:05.655 "params": { 00:18:05.655 "cpumask": "1" 00:18:05.655 } 00:18:05.655 }, 00:18:05.655 { 00:18:05.655 "method": "ublk_start_disk", 00:18:05.655 "params": { 00:18:05.655 "bdev_name": "malloc0", 00:18:05.655 "ublk_id": 0, 00:18:05.655 "num_queues": 1, 00:18:05.655 "queue_depth": 128 00:18:05.655 } 00:18:05.655 } 00:18:05.655 ] 00:18:05.655 }, 00:18:05.655 { 00:18:05.655 "subsystem": "nbd", 00:18:05.655 "config": [] 00:18:05.655 }, 00:18:05.655 { 00:18:05.655 "subsystem": "nvmf", 00:18:05.655 "config": [ 00:18:05.655 { 00:18:05.655 "method": "nvmf_set_config", 00:18:05.655 "params": { 00:18:05.655 "discovery_filter": "match_any", 00:18:05.655 "admin_cmd_passthru": { 00:18:05.655 "identify_ctrlr": false 00:18:05.655 } 00:18:05.655 } 00:18:05.655 }, 00:18:05.655 { 00:18:05.655 "method": "nvmf_set_max_subsystems", 00:18:05.655 "params": { 00:18:05.655 "max_subsystems": 1024 00:18:05.655 } 00:18:05.655 }, 00:18:05.655 { 00:18:05.655 "method": "nvmf_set_crdt", 00:18:05.655 "params": { 00:18:05.655 "crdt1": 0, 00:18:05.655 "crdt2": 0, 00:18:05.655 "crdt3": 0 00:18:05.655 } 00:18:05.655 } 00:18:05.655 ] 00:18:05.655 }, 00:18:05.655 { 00:18:05.655 "subsystem": "iscsi", 00:18:05.655 "config": [ 00:18:05.655 { 00:18:05.655 "method": "iscsi_set_options", 00:18:05.655 "params": { 00:18:05.655 "node_base": "iqn.2016-06.io.spdk", 00:18:05.655 "max_sessions": 128, 00:18:05.655 "max_connections_per_session": 2, 00:18:05.655 "max_queue_depth": 64, 00:18:05.655 "default_time2wait": 2, 00:18:05.655 "default_time2retain": 20, 00:18:05.655 "first_burst_length": 8192, 00:18:05.655 "immediate_data": true, 00:18:05.655 "allow_duplicated_isid": false, 00:18:05.655 "error_recovery_level": 0, 00:18:05.655 "nop_timeout": 60, 
00:18:05.655 "nop_in_interval": 30, 00:18:05.655 "disable_chap": false, 00:18:05.655 "require_chap": false, 00:18:05.655 "mutual_chap": false, 00:18:05.655 "chap_group": 0, 00:18:05.655 "max_large_datain_per_connection": 64, 00:18:05.655 "max_r2t_per_connection": 4, 00:18:05.655 "pdu_pool_size": 36864, 00:18:05.655 "immediate_data_pool_size": 16384, 00:18:05.655 "data_out_pool_size": 2048 00:18:05.655 } 00:18:05.655 } 00:18:05.655 ] 00:18:05.655 } 00:18:05.655 ] 00:18:05.655 }' 00:18:05.655 19:39:56 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 78328 00:18:05.655 19:39:56 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 78328 ']' 00:18:05.655 19:39:56 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 78328 00:18:05.655 19:39:56 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:18:05.655 19:39:56 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:05.655 19:39:56 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78328 00:18:05.655 killing process with pid 78328 00:18:05.655 19:39:56 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:05.655 19:39:56 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:05.655 19:39:56 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78328' 00:18:05.655 19:39:56 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 78328 00:18:05.655 19:39:56 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 78328 00:18:07.558 [2024-07-15 19:39:57.861590] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:07.558 [2024-07-15 19:39:57.900870] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:07.558 [2024-07-15 19:39:57.901115] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:07.558 [2024-07-15 19:39:57.908840] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:07.558 [2024-07-15 19:39:57.908937] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:07.558 [2024-07-15 19:39:57.908948] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:07.558 [2024-07-15 19:39:57.908982] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:18:07.558 [2024-07-15 19:39:57.909167] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:18:08.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.935 19:39:59 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=78399 00:18:08.935 19:39:59 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 78399 00:18:08.935 19:39:59 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 78399 ']' 00:18:08.935 19:39:59 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.935 19:39:59 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.935 19:39:59 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:08.935 19:39:59 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.935 19:39:59 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:08.935 19:39:59 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:18:08.935 "subsystems": [ 00:18:08.935 { 00:18:08.935 "subsystem": "keyring", 00:18:08.935 "config": [] 00:18:08.935 }, 00:18:08.935 { 00:18:08.935 "subsystem": "iobuf", 00:18:08.935 "config": [ 00:18:08.935 { 00:18:08.935 "method": "iobuf_set_options", 00:18:08.935 "params": { 00:18:08.935 "small_pool_count": 8192, 00:18:08.935 "large_pool_count": 1024, 00:18:08.935 "small_bufsize": 8192, 00:18:08.935 "large_bufsize": 135168 00:18:08.935 } 00:18:08.935 } 00:18:08.935 ] 00:18:08.935 }, 00:18:08.935 { 00:18:08.935 "subsystem": "sock", 00:18:08.935 "config": [ 00:18:08.935 { 00:18:08.935 "method": "sock_set_default_impl", 00:18:08.935 "params": { 00:18:08.935 "impl_name": "posix" 00:18:08.935 } 00:18:08.935 }, 00:18:08.935 { 00:18:08.935 "method": "sock_impl_set_options", 00:18:08.935 "params": { 00:18:08.935 "impl_name": "ssl", 00:18:08.935 "recv_buf_size": 4096, 00:18:08.935 "send_buf_size": 4096, 00:18:08.935 "enable_recv_pipe": true, 00:18:08.935 "enable_quickack": false, 00:18:08.935 "enable_placement_id": 0, 00:18:08.935 "enable_zerocopy_send_server": true, 00:18:08.935 "enable_zerocopy_send_client": false, 00:18:08.935 "zerocopy_threshold": 0, 00:18:08.935 "tls_version": 0, 00:18:08.935 "enable_ktls": false 00:18:08.935 } 00:18:08.935 }, 00:18:08.935 { 00:18:08.935 "method": "sock_impl_set_options", 00:18:08.935 "params": { 00:18:08.935 "impl_name": "posix", 00:18:08.935 "recv_buf_size": 2097152, 00:18:08.935 "send_buf_size": 2097152, 00:18:08.935 "enable_recv_pipe": true, 00:18:08.935 "enable_quickack": false, 00:18:08.935 "enable_placement_id": 0, 00:18:08.935 "enable_zerocopy_send_server": true, 00:18:08.935 "enable_zerocopy_send_client": false, 00:18:08.935 "zerocopy_threshold": 0, 00:18:08.935 "tls_version": 0, 00:18:08.935 "enable_ktls": false 00:18:08.935 } 00:18:08.935 } 00:18:08.935 ] 00:18:08.935 }, 00:18:08.935 { 00:18:08.935 "subsystem": "vmd", 00:18:08.935 "config": [] 00:18:08.935 }, 00:18:08.935 { 00:18:08.935 "subsystem": "accel", 00:18:08.935 "config": [ 00:18:08.935 { 00:18:08.935 "method": "accel_set_options", 00:18:08.935 "params": { 00:18:08.935 "small_cache_size": 128, 00:18:08.935 "large_cache_size": 16, 00:18:08.935 "task_count": 2048, 00:18:08.935 "sequence_count": 2048, 00:18:08.935 "buf_count": 2048 00:18:08.935 } 00:18:08.935 } 00:18:08.935 ] 00:18:08.935 }, 00:18:08.935 { 00:18:08.935 "subsystem": "bdev", 00:18:08.935 "config": [ 00:18:08.935 { 00:18:08.935 "method": "bdev_set_options", 00:18:08.935 "params": { 00:18:08.935 "bdev_io_pool_size": 65535, 00:18:08.935 "bdev_io_cache_size": 256, 00:18:08.935 "bdev_auto_examine": true, 00:18:08.935 "iobuf_small_cache_size": 128, 00:18:08.935 "iobuf_large_cache_size": 16 00:18:08.935 } 00:18:08.935 }, 00:18:08.935 { 00:18:08.935 "method": "bdev_raid_set_options", 00:18:08.935 "params": { 00:18:08.935 "process_window_size_kb": 1024 00:18:08.935 } 00:18:08.935 }, 00:18:08.935 { 00:18:08.935 "method": "bdev_iscsi_set_options", 00:18:08.935 "params": { 00:18:08.935 "timeout_sec": 30 00:18:08.935 } 00:18:08.935 }, 00:18:08.935 { 00:18:08.935 "method": "bdev_nvme_set_options", 00:18:08.935 "params": { 00:18:08.935 "action_on_timeout": "none", 00:18:08.935 "timeout_us": 0, 00:18:08.935 "timeout_admin_us": 0, 00:18:08.935 "keep_alive_timeout_ms": 
10000, 00:18:08.935 "arbitration_burst": 0, 00:18:08.935 "low_priority_weight": 0, 00:18:08.935 "medium_priority_weight": 0, 00:18:08.935 "high_priority_weight": 0, 00:18:08.935 "nvme_adminq_poll_period_us": 10000, 00:18:08.935 "nvme_ioq_poll_period_us": 0, 00:18:08.935 "io_queue_requests": 0, 00:18:08.935 "delay_cmd_submit": true, 00:18:08.935 "transport_retry_count": 4, 00:18:08.935 "bdev_retry_count": 3, 00:18:08.935 "transport_ack_timeout": 0, 00:18:08.935 "ctrlr_loss_timeout_sec": 0, 00:18:08.935 "reconnect_delay_sec": 0, 00:18:08.935 "fast_io_fail_timeout_sec": 0, 00:18:08.935 "disable_auto_failback": false, 00:18:08.935 "generate_uuids": false, 00:18:08.935 "transport_tos": 0, 00:18:08.935 "nvme_error_stat": false, 00:18:08.935 "rdma_srq_size": 0, 00:18:08.935 "io_path_stat": false, 00:18:08.935 "allow_accel_sequence": false, 00:18:08.935 "rdma_max_cq_size": 0, 00:18:08.935 "rdma_cm_event_timeout_ms": 0, 00:18:08.935 "dhchap_digests": [ 00:18:08.935 "sha256", 00:18:08.935 "sha384", 00:18:08.935 "sha512" 00:18:08.935 ], 00:18:08.935 "dhchap_dhgroups": [ 00:18:08.935 "null", 00:18:08.935 "ffdhe2048", 00:18:08.935 "ffdhe3072", 00:18:08.935 "ffdhe4096", 00:18:08.935 "ffdhe6144", 00:18:08.935 "ffdhe8192" 00:18:08.935 ] 00:18:08.935 } 00:18:08.935 }, 00:18:08.935 { 00:18:08.935 "method": "bdev_nvme_set_hotplug", 00:18:08.935 "params": { 00:18:08.935 "period_us": 100000, 00:18:08.935 "enable": false 00:18:08.935 } 00:18:08.935 }, 00:18:08.935 { 00:18:08.935 "method": "bdev_malloc_create", 00:18:08.935 "params": { 00:18:08.935 "name": "malloc0", 00:18:08.935 "num_blocks": 8192, 00:18:08.935 "block_size": 4096, 00:18:08.935 "physical_block_size": 4096, 00:18:08.935 "uuid": "307c0a0a-bed7-4133-8337-929b8d059910", 00:18:08.935 "optimal_io_boundary": 0 00:18:08.935 } 00:18:08.935 }, 00:18:08.935 { 00:18:08.935 "method": "bdev_wait_for_examine" 00:18:08.935 } 00:18:08.935 ] 00:18:08.935 }, 00:18:08.935 { 00:18:08.935 "subsystem": "scsi", 00:18:08.935 "config": null 00:18:08.935 }, 00:18:08.935 { 00:18:08.935 "subsystem": "scheduler", 00:18:08.935 "config": [ 00:18:08.935 { 00:18:08.935 "method": "framework_set_scheduler", 00:18:08.935 "params": { 00:18:08.935 "name": "static" 00:18:08.935 } 00:18:08.935 } 00:18:08.935 ] 00:18:08.935 }, 00:18:08.935 { 00:18:08.935 "subsystem": "vhost_scsi", 00:18:08.935 "config": [] 00:18:08.935 }, 00:18:08.935 { 00:18:08.935 "subsystem": "vhost_blk", 00:18:08.935 "config": [] 00:18:08.935 }, 00:18:08.935 { 00:18:08.935 "subsystem": "ublk", 00:18:08.935 "config": [ 00:18:08.935 { 00:18:08.935 "method": "ublk_create_target", 00:18:08.935 "params": { 00:18:08.935 "cpumask": "1" 00:18:08.935 } 00:18:08.935 }, 00:18:08.935 { 00:18:08.935 "method": "ublk_start_disk", 00:18:08.935 "params": { 00:18:08.935 "bdev_name": "malloc0", 00:18:08.935 "ublk_id": 0, 00:18:08.935 "num_queues": 1, 00:18:08.935 "queue_depth": 128 00:18:08.935 } 00:18:08.935 } 00:18:08.935 ] 00:18:08.935 }, 00:18:08.936 { 00:18:08.936 "subsystem": "nbd", 00:18:08.936 "config": [] 00:18:08.936 }, 00:18:08.936 { 00:18:08.936 "subsystem": "nvmf", 00:18:08.936 "config": [ 00:18:08.936 { 00:18:08.936 "method": "nvmf_set_config", 00:18:08.936 "params": { 00:18:08.936 "discovery_filter": "match_any", 00:18:08.936 "admin_cmd_passthru": { 00:18:08.936 "identify_ctrlr": false 00:18:08.936 } 00:18:08.936 } 00:18:08.936 }, 00:18:08.936 { 00:18:08.936 "method": "nvmf_set_max_subsystems", 00:18:08.936 "params": { 00:18:08.936 "max_subsystems": 1024 00:18:08.936 } 00:18:08.936 }, 00:18:08.936 { 00:18:08.936 
"method": "nvmf_set_crdt", 00:18:08.936 "params": { 00:18:08.936 "crdt1": 0, 00:18:08.936 "crdt2": 0, 00:18:08.936 "crdt3": 0 00:18:08.936 } 00:18:08.936 } 00:18:08.936 ] 00:18:08.936 }, 00:18:08.936 { 00:18:08.936 "subsystem": "iscsi", 00:18:08.936 "config": [ 00:18:08.936 { 00:18:08.936 "method": "iscsi_set_options", 00:18:08.936 "params": { 00:18:08.936 "node_base": "iqn.2016-06.io.spdk", 00:18:08.936 "max_sessions": 128, 00:18:08.936 "max_connections_per_session": 2, 00:18:08.936 "max_queue_depth": 64, 00:18:08.936 "default_time2wait": 2, 00:18:08.936 "default_time2retain": 20, 00:18:08.936 "fir 19:39:59 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:18:08.936 st_burst_length": 8192, 00:18:08.936 "immediate_data": true, 00:18:08.936 "allow_duplicated_isid": false, 00:18:08.936 "error_recovery_level": 0, 00:18:08.936 "nop_timeout": 60, 00:18:08.936 "nop_in_interval": 30, 00:18:08.936 "disable_chap": false, 00:18:08.936 "require_chap": false, 00:18:08.936 "mutual_chap": false, 00:18:08.936 "chap_group": 0, 00:18:08.936 "max_large_datain_per_connection": 64, 00:18:08.936 "max_r2t_per_connection": 4, 00:18:08.936 "pdu_pool_size": 36864, 00:18:08.936 "immediate_data_pool_size": 16384, 00:18:08.936 "data_out_pool_size": 2048 00:18:08.936 } 00:18:08.936 } 00:18:08.936 ] 00:18:08.936 } 00:18:08.936 ] 00:18:08.936 }' 00:18:08.936 [2024-07-15 19:39:59.683133] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:18:08.936 [2024-07-15 19:39:59.683293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78399 ] 00:18:09.195 [2024-07-15 19:39:59.856735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.454 [2024-07-15 19:40:00.206564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.843 [2024-07-15 19:40:01.442877] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:10.843 [2024-07-15 19:40:01.444235] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:10.843 [2024-07-15 19:40:01.453914] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:10.843 [2024-07-15 19:40:01.454005] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:10.843 [2024-07-15 19:40:01.454018] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:10.843 [2024-07-15 19:40:01.454026] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:10.843 [2024-07-15 19:40:01.469823] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:10.843 [2024-07-15 19:40:01.469851] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:10.843 [2024-07-15 19:40:01.477814] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:10.843 [2024-07-15 19:40:01.477948] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:10.843 [2024-07-15 19:40:01.501815] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:10.843 19:40:01 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.843 19:40:01 ublk.test_save_ublk_config -- 
common/autotest_common.sh@862 -- # return 0 00:18:10.843 19:40:01 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:18:10.843 19:40:01 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.843 19:40:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:10.843 19:40:01 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:18:10.843 19:40:01 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.843 19:40:01 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:10.843 19:40:01 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:18:10.843 19:40:01 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 78399 00:18:10.843 19:40:01 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 78399 ']' 00:18:10.843 19:40:01 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 78399 00:18:10.843 19:40:01 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:18:10.843 19:40:01 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:10.843 19:40:01 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78399 00:18:11.110 19:40:01 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:11.110 19:40:01 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:11.110 killing process with pid 78399 00:18:11.110 19:40:01 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78399' 00:18:11.110 19:40:01 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 78399 00:18:11.110 19:40:01 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 78399 00:18:13.011 [2024-07-15 19:40:03.299152] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:13.011 [2024-07-15 19:40:03.324891] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:13.011 [2024-07-15 19:40:03.325071] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:13.011 [2024-07-15 19:40:03.332819] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:13.011 [2024-07-15 19:40:03.332873] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:13.011 [2024-07-15 19:40:03.332882] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:13.011 [2024-07-15 19:40:03.332909] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:18:13.011 [2024-07-15 19:40:03.333101] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:18:14.385 19:40:04 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:18:14.385 00:18:14.385 real 0m10.820s 00:18:14.385 user 0m9.567s 00:18:14.385 sys 0m2.282s 00:18:14.385 19:40:04 ublk.test_save_ublk_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:14.385 ************************************ 00:18:14.385 19:40:04 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:14.385 END TEST test_save_ublk_config 00:18:14.385 ************************************ 00:18:14.385 19:40:04 ublk -- common/autotest_common.sh@1142 -- # return 0 00:18:14.385 19:40:04 ublk -- ublk/ublk.sh@139 -- # spdk_pid=78492 00:18:14.385 19:40:04 ublk -- ublk/ublk.sh@140 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:14.385 19:40:04 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:14.385 19:40:04 ublk -- ublk/ublk.sh@141 -- # waitforlisten 78492 00:18:14.385 19:40:04 ublk -- common/autotest_common.sh@829 -- # '[' -z 78492 ']' 00:18:14.385 19:40:04 ublk -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.385 19:40:04 ublk -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.385 19:40:04 ublk -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.385 19:40:04 ublk -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.385 19:40:04 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:14.385 [2024-07-15 19:40:05.078953] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:18:14.385 [2024-07-15 19:40:05.079134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78492 ] 00:18:14.643 [2024-07-15 19:40:05.268662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:14.901 [2024-07-15 19:40:05.574473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.901 [2024-07-15 19:40:05.574506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.898 19:40:06 ublk -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.898 19:40:06 ublk -- common/autotest_common.sh@862 -- # return 0 00:18:15.898 19:40:06 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:18:15.898 19:40:06 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:15.898 19:40:06 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:15.898 19:40:06 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:15.898 ************************************ 00:18:15.898 START TEST test_create_ublk 00:18:15.898 ************************************ 00:18:15.898 19:40:06 ublk.test_create_ublk -- common/autotest_common.sh@1123 -- # test_create_ublk 00:18:15.898 19:40:06 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:18:15.898 19:40:06 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.898 19:40:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:15.898 [2024-07-15 19:40:06.544799] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:15.898 [2024-07-15 19:40:06.547754] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:15.898 19:40:06 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.898 19:40:06 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:18:15.898 19:40:06 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:18:15.898 19:40:06 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.898 19:40:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:16.156 19:40:06 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.156 19:40:06 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:18:16.156 19:40:06 
ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:16.156 19:40:06 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.156 19:40:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:16.156 [2024-07-15 19:40:06.886986] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:18:16.156 [2024-07-15 19:40:06.887459] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:16.156 [2024-07-15 19:40:06.887481] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:16.156 [2024-07-15 19:40:06.887494] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:16.156 [2024-07-15 19:40:06.895158] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:16.156 [2024-07-15 19:40:06.895187] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:16.156 [2024-07-15 19:40:06.902815] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:16.156 [2024-07-15 19:40:06.914018] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:16.156 [2024-07-15 19:40:06.928816] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:16.156 19:40:06 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.156 19:40:06 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:18:16.156 19:40:06 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:18:16.156 19:40:06 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:18:16.156 19:40:06 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.156 19:40:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:16.413 19:40:06 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.413 19:40:06 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:18:16.413 { 00:18:16.413 "ublk_device": "/dev/ublkb0", 00:18:16.413 "id": 0, 00:18:16.413 "queue_depth": 512, 00:18:16.413 "num_queues": 4, 00:18:16.413 "bdev_name": "Malloc0" 00:18:16.413 } 00:18:16.413 ]' 00:18:16.413 19:40:06 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:18:16.413 19:40:06 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:16.413 19:40:07 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:18:16.413 19:40:07 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:18:16.413 19:40:07 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:18:16.413 19:40:07 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:18:16.413 19:40:07 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:18:16.413 19:40:07 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:18:16.413 19:40:07 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:18:16.413 19:40:07 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:16.413 19:40:07 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:18:16.413 19:40:07 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:18:16.413 19:40:07 ublk.test_create_ublk -- lvol/common.sh@41 -- # local 
offset=0 00:18:16.413 19:40:07 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:18:16.413 19:40:07 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:18:16.413 19:40:07 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:18:16.413 19:40:07 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:18:16.413 19:40:07 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:18:16.413 19:40:07 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:18:16.413 19:40:07 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:18:16.413 19:40:07 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:18:16.413 19:40:07 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:18:16.671 fio: verification read phase will never start because write phase uses all of runtime 00:18:16.671 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:18:16.671 fio-3.35 00:18:16.671 Starting 1 process 00:18:26.700 00:18:26.700 fio_test: (groupid=0, jobs=1): err= 0: pid=78544: Mon Jul 15 19:40:17 2024 00:18:26.700 write: IOPS=13.8k, BW=54.0MiB/s (56.6MB/s)(540MiB/10001msec); 0 zone resets 00:18:26.700 clat (usec): min=45, max=4040, avg=71.21, stdev=107.15 00:18:26.700 lat (usec): min=45, max=4041, avg=71.82, stdev=107.21 00:18:26.700 clat percentiles (usec): 00:18:26.700 | 1.00th=[ 51], 5.00th=[ 57], 10.00th=[ 58], 20.00th=[ 60], 00:18:26.700 | 30.00th=[ 62], 40.00th=[ 63], 50.00th=[ 65], 60.00th=[ 68], 00:18:26.700 | 70.00th=[ 71], 80.00th=[ 74], 90.00th=[ 78], 95.00th=[ 83], 00:18:26.700 | 99.00th=[ 95], 99.50th=[ 106], 99.90th=[ 2245], 99.95th=[ 3032], 00:18:26.700 | 99.99th=[ 3687] 00:18:26.700 bw ( KiB/s): min=45120, max=60488, per=100.00%, avg=55536.42, stdev=3618.31, samples=19 00:18:26.700 iops : min=11280, max=15122, avg=13884.11, stdev=904.58, samples=19 00:18:26.700 lat (usec) : 50=0.81%, 100=98.55%, 250=0.36%, 500=0.06%, 750=0.01% 00:18:26.700 lat (usec) : 1000=0.02% 00:18:26.700 lat (msec) : 2=0.07%, 4=0.11%, 10=0.01% 00:18:26.700 cpu : usr=3.93%, sys=11.36%, ctx=138300, majf=0, minf=795 00:18:26.700 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:26.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.700 issued rwts: total=0,138308,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:26.700 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:26.700 00:18:26.700 Run status group 0 (all jobs): 00:18:26.700 WRITE: bw=54.0MiB/s (56.6MB/s), 54.0MiB/s-54.0MiB/s (56.6MB/s-56.6MB/s), io=540MiB (567MB), run=10001-10001msec 00:18:26.700 00:18:26.700 Disk stats (read/write): 00:18:26.700 ublkb0: ios=0/136977, merge=0/0, ticks=0/8419, in_queue=8419, util=99.02% 00:18:26.700 19:40:17 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:18:26.700 19:40:17 ublk.test_create_ublk -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.700 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.700 [2024-07-15 19:40:17.452517] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:26.958 [2024-07-15 19:40:17.497373] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:26.958 [2024-07-15 19:40:17.498864] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:26.958 [2024-07-15 19:40:17.504803] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:26.958 [2024-07-15 19:40:17.505185] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:26.958 [2024-07-15 19:40:17.505207] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:26.958 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.958 19:40:17 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:18:26.958 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # local es=0 00:18:26.958 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:18:26.958 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:26.958 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:26.958 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:26.958 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:26.958 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # rpc_cmd ublk_stop_disk 0 00:18:26.958 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.958 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.958 [2024-07-15 19:40:17.519985] ublk.c:1071:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:18:26.958 request: 00:18:26.958 { 00:18:26.958 "ublk_id": 0, 00:18:26.958 "method": "ublk_stop_disk", 00:18:26.958 "req_id": 1 00:18:26.958 } 00:18:26.958 Got JSON-RPC error response 00:18:26.958 response: 00:18:26.958 { 00:18:26.958 "code": -19, 00:18:26.958 "message": "No such device" 00:18:26.958 } 00:18:26.958 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:26.958 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # es=1 00:18:26.958 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:26.958 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:26.958 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:26.958 19:40:17 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:18:26.958 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.958 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.958 [2024-07-15 19:40:17.535951] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:18:26.958 [2024-07-15 19:40:17.543842] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:18:26.958 [2024-07-15 19:40:17.543903] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:26.958 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.958 19:40:17 
ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:26.958 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.958 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:27.217 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.217 19:40:17 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:18:27.217 19:40:17 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:27.217 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.217 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:27.217 19:40:17 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.217 19:40:17 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:27.217 19:40:17 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:18:27.476 19:40:18 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:27.476 19:40:18 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:27.476 19:40:18 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.476 19:40:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:27.476 19:40:18 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.476 19:40:18 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:27.476 19:40:18 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:18:27.476 19:40:18 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:27.476 00:18:27.476 real 0m11.542s 00:18:27.476 user 0m0.797s 00:18:27.476 sys 0m1.275s 00:18:27.476 19:40:18 ublk.test_create_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:27.476 19:40:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:27.476 ************************************ 00:18:27.476 END TEST test_create_ublk 00:18:27.476 ************************************ 00:18:27.476 19:40:18 ublk -- common/autotest_common.sh@1142 -- # return 0 00:18:27.476 19:40:18 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:18:27.476 19:40:18 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:27.476 19:40:18 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:27.476 19:40:18 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:27.476 ************************************ 00:18:27.476 START TEST test_create_multi_ublk 00:18:27.476 ************************************ 00:18:27.476 19:40:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@1123 -- # test_create_multi_ublk 00:18:27.476 19:40:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:18:27.476 19:40:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.476 19:40:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:27.476 [2024-07-15 19:40:18.149807] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:27.476 [2024-07-15 19:40:18.153044] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:27.476 19:40:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.476 19:40:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:18:27.476 19:40:18 
ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:18:27.476 19:40:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:27.476 19:40:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:18:27.476 19:40:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.476 19:40:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:27.770 19:40:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.770 19:40:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:18:27.770 19:40:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:27.770 19:40:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.770 19:40:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:27.770 [2024-07-15 19:40:18.517030] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:18:27.770 [2024-07-15 19:40:18.517612] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:27.770 [2024-07-15 19:40:18.517636] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:27.770 [2024-07-15 19:40:18.517646] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:27.770 [2024-07-15 19:40:18.524856] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:27.770 [2024-07-15 19:40:18.524892] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:27.770 [2024-07-15 19:40:18.532850] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:27.770 [2024-07-15 19:40:18.533636] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:28.029 [2024-07-15 19:40:18.564847] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:28.029 19:40:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.029 19:40:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:18:28.029 19:40:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:28.029 19:40:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:18:28.029 19:40:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.029 19:40:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:28.287 19:40:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.287 19:40:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:18:28.287 19:40:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:18:28.287 19:40:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.287 19:40:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:28.287 [2024-07-15 19:40:18.943985] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:18:28.287 [2024-07-15 19:40:18.944488] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:18:28.287 [2024-07-15 19:40:18.944508] ublk.c: 
955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:28.287 [2024-07-15 19:40:18.944520] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:28.287 [2024-07-15 19:40:18.954875] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:28.287 [2024-07-15 19:40:18.954919] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:28.287 [2024-07-15 19:40:18.962846] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:28.287 [2024-07-15 19:40:18.963621] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:28.287 [2024-07-15 19:40:18.967544] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:28.287 19:40:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.287 19:40:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:18:28.287 19:40:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:28.287 19:40:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:18:28.287 19:40:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.287 19:40:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:28.546 19:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.546 19:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:18:28.546 19:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:18:28.546 19:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.546 19:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:28.546 [2024-07-15 19:40:19.326972] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:18:28.546 [2024-07-15 19:40:19.327488] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:18:28.546 [2024-07-15 19:40:19.327515] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:18:28.546 [2024-07-15 19:40:19.327525] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:18:28.546 [2024-07-15 19:40:19.334852] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:28.546 [2024-07-15 19:40:19.334888] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:28.804 [2024-07-15 19:40:19.342844] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:28.804 [2024-07-15 19:40:19.343645] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:18:28.804 [2024-07-15 19:40:19.351905] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:18:28.804 19:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.804 19:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:18:28.804 19:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:28.804 19:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:18:28.804 19:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:28.804 19:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:29.063 19:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.063 19:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:18:29.063 19:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:18:29.063 19:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.063 19:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:29.063 [2024-07-15 19:40:19.729007] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:18:29.063 [2024-07-15 19:40:19.729546] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:18:29.063 [2024-07-15 19:40:19.729566] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:18:29.063 [2024-07-15 19:40:19.729579] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:18:29.063 [2024-07-15 19:40:19.736866] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:29.063 [2024-07-15 19:40:19.736912] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:29.063 [2024-07-15 19:40:19.744843] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:29.063 [2024-07-15 19:40:19.745633] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:18:29.063 [2024-07-15 19:40:19.748867] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:18:29.063 19:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.063 19:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:18:29.063 19:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:18:29.063 19:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.063 19:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:29.063 19:40:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.063 19:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:18:29.063 { 00:18:29.063 "ublk_device": "/dev/ublkb0", 00:18:29.063 "id": 0, 00:18:29.063 "queue_depth": 512, 00:18:29.063 "num_queues": 4, 00:18:29.063 "bdev_name": "Malloc0" 00:18:29.063 }, 00:18:29.063 { 00:18:29.063 "ublk_device": "/dev/ublkb1", 00:18:29.063 "id": 1, 00:18:29.063 "queue_depth": 512, 00:18:29.063 "num_queues": 4, 00:18:29.063 "bdev_name": "Malloc1" 00:18:29.063 }, 00:18:29.063 { 00:18:29.063 "ublk_device": "/dev/ublkb2", 00:18:29.063 "id": 2, 00:18:29.063 "queue_depth": 512, 00:18:29.063 "num_queues": 4, 00:18:29.063 "bdev_name": "Malloc2" 00:18:29.063 }, 00:18:29.063 { 00:18:29.063 "ublk_device": "/dev/ublkb3", 00:18:29.063 "id": 3, 00:18:29.063 "queue_depth": 512, 00:18:29.063 "num_queues": 4, 00:18:29.063 "bdev_name": "Malloc3" 00:18:29.063 } 00:18:29.063 ]' 00:18:29.063 19:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:18:29.063 19:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:29.063 19:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:18:29.322 19:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- 
# [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:29.322 19:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:18:29.322 19:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:18:29.322 19:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:18:29.322 19:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:29.322 19:40:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:18:29.322 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:29.322 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:18:29.322 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:29.322 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:29.322 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:18:29.322 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:18:29.322 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:18:29.580 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:18:29.580 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:18:29.580 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:29.580 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:18:29.580 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:29.580 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:18:29.580 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:18:29.580 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:29.580 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:18:29.580 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:18:29.580 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:18:29.580 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:18:29.580 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:18:29.839 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:29.839 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:18:29.839 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:29.839 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:18:29.839 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:18:29.839 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:29.839 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:18:29.839 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:18:29.839 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:18:29.839 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:18:29.839 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 
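The jq checks running through this stretch of the log walk the array returned by ublk_get_disks and compare each entry's device node, id, queue depth, queue count, and backing bdev against what test_create_multi_ublk just created; the remaining checks for device 3 continue just below. Condensed into one loop, the verification amounts to roughly the following sketch (assuming the four Malloc0..Malloc3 disks created above and jq on PATH):

    disks=$(scripts/rpc.py ublk_get_disks)
    for i in 0 1 2 3; do
        [[ $(jq -r ".[$i].ublk_device" <<< "$disks") == "/dev/ublkb$i" ]]   # device node
        [[ $(jq -r ".[$i].id"          <<< "$disks") == "$i" ]]             # ublk id
        [[ $(jq -r ".[$i].queue_depth" <<< "$disks") == 512 ]]              # -d 512 at creation
        [[ $(jq -r ".[$i].num_queues"  <<< "$disks") == 4 ]]                # -q 4 at creation
        [[ $(jq -r ".[$i].bdev_name"   <<< "$disks") == "Malloc$i" ]]       # backing malloc bdev
    done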
00:18:29.839 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:29.839 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:18:29.839 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:29.839 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:18:30.097 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:18:30.097 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:18:30.097 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:18:30.097 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:30.097 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:18:30.097 19:40:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.097 19:40:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:30.097 [2024-07-15 19:40:20.688013] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:30.097 [2024-07-15 19:40:20.725857] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:30.097 [2024-07-15 19:40:20.727337] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:30.097 [2024-07-15 19:40:20.733824] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:30.097 [2024-07-15 19:40:20.734190] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:30.097 [2024-07-15 19:40:20.734208] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:30.097 19:40:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.097 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:30.098 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:18:30.098 19:40:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.098 19:40:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:30.098 [2024-07-15 19:40:20.749948] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:30.098 [2024-07-15 19:40:20.782382] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:30.098 [2024-07-15 19:40:20.787236] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:30.098 [2024-07-15 19:40:20.794870] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:30.098 [2024-07-15 19:40:20.795265] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:30.098 [2024-07-15 19:40:20.795277] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:30.098 19:40:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.098 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:30.098 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:18:30.098 19:40:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.098 19:40:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:30.098 [2024-07-15 19:40:20.803977] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: 
ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:18:30.098 [2024-07-15 19:40:20.835364] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:30.098 [2024-07-15 19:40:20.841253] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:18:30.098 [2024-07-15 19:40:20.848860] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:30.098 [2024-07-15 19:40:20.849221] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:18:30.098 [2024-07-15 19:40:20.849239] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:18:30.098 19:40:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.098 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:30.098 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:18:30.098 19:40:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.098 19:40:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:30.098 [2024-07-15 19:40:20.866976] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:18:30.356 [2024-07-15 19:40:20.898884] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:30.356 [2024-07-15 19:40:20.903241] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:18:30.356 [2024-07-15 19:40:20.914918] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:30.356 [2024-07-15 19:40:20.915276] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:18:30.356 [2024-07-15 19:40:20.915293] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:18:30.356 19:40:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.356 19:40:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:18:30.356 [2024-07-15 19:40:21.110954] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:18:30.356 [2024-07-15 19:40:21.119047] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:18:30.356 [2024-07-15 19:40:21.119109] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:30.356 19:40:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:18:30.356 19:40:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:30.356 19:40:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:30.356 19:40:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.356 19:40:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:30.922 19:40:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.922 19:40:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:30.922 19:40:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:30.922 19:40:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.922 19:40:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:31.489 19:40:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.489 19:40:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for 
i in $(seq 0 $MAX_DEV_ID) 00:18:31.489 19:40:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:18:31.489 19:40:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.489 19:40:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:31.748 19:40:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.748 19:40:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:31.748 19:40:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:18:31.748 19:40:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.748 19:40:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:32.316 19:40:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.316 19:40:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:18:32.316 19:40:22 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:32.316 19:40:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.316 19:40:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:32.316 19:40:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.316 19:40:22 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:32.316 19:40:22 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:18:32.316 19:40:22 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:32.316 19:40:22 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:32.316 19:40:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.316 19:40:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:32.316 19:40:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.316 19:40:22 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:32.316 19:40:22 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:18:32.316 19:40:22 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:32.316 00:18:32.316 real 0m4.780s 00:18:32.316 user 0m1.061s 00:18:32.316 sys 0m0.207s 00:18:32.316 19:40:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:32.316 19:40:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:32.316 ************************************ 00:18:32.316 END TEST test_create_multi_ublk 00:18:32.316 ************************************ 00:18:32.316 19:40:22 ublk -- common/autotest_common.sh@1142 -- # return 0 00:18:32.316 19:40:22 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:18:32.316 19:40:22 ublk -- ublk/ublk.sh@147 -- # cleanup 00:18:32.316 19:40:22 ublk -- ublk/ublk.sh@130 -- # killprocess 78492 00:18:32.316 19:40:22 ublk -- common/autotest_common.sh@948 -- # '[' -z 78492 ']' 00:18:32.316 19:40:22 ublk -- common/autotest_common.sh@952 -- # kill -0 78492 00:18:32.316 19:40:22 ublk -- common/autotest_common.sh@953 -- # uname 00:18:32.316 19:40:22 ublk -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:32.316 19:40:22 ublk -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78492 00:18:32.316 19:40:22 ublk -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:32.316 killing process with pid 78492 00:18:32.316 19:40:22 ublk -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:32.316 19:40:22 ublk -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78492' 00:18:32.316 19:40:22 ublk -- common/autotest_common.sh@967 -- # kill 78492 00:18:32.316 19:40:22 ublk -- common/autotest_common.sh@972 -- # wait 78492 00:18:33.695 [2024-07-15 19:40:24.273169] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:18:33.695 [2024-07-15 19:40:24.273268] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:18:35.072 00:18:35.072 real 0m31.748s 00:18:35.072 user 0m47.267s 00:18:35.072 sys 0m8.850s 00:18:35.072 19:40:25 ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:35.072 19:40:25 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:35.072 ************************************ 00:18:35.072 END TEST ublk 00:18:35.072 ************************************ 00:18:35.072 19:40:25 -- common/autotest_common.sh@1142 -- # return 0 00:18:35.072 19:40:25 -- spdk/autotest.sh@252 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:35.072 19:40:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:35.072 19:40:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:35.072 19:40:25 -- common/autotest_common.sh@10 -- # set +x 00:18:35.072 ************************************ 00:18:35.072 START TEST ublk_recovery 00:18:35.072 ************************************ 00:18:35.072 19:40:25 ublk_recovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:35.072 * Looking for test storage... 00:18:35.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:35.072 19:40:25 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:35.072 19:40:25 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:35.072 19:40:25 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:35.072 19:40:25 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:35.072 19:40:25 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:35.072 19:40:25 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:35.072 19:40:25 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:35.072 19:40:25 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:35.072 19:40:25 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:18:35.072 19:40:25 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:18:35.072 19:40:25 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=78894 00:18:35.072 19:40:25 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:35.072 19:40:25 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:35.072 19:40:25 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 78894 00:18:35.072 19:40:25 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 78894 ']' 00:18:35.072 19:40:25 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.072 19:40:25 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:35.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
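The ublk_recovery suite needs both the kernel-side driver and a user-space SPDK target before it can create any devices. A minimal sketch of that prologue, using only the paths, core mask and helpers visible in the trace above (waitforlisten is the harness helper from autotest_common.sh that polls the RPC socket):

    # load the kernel driver so /dev/ublkb* nodes can be created later
    modprobe ublk_drv
    # start the SPDK target on cores 0-1 with ublk debug logging enabled
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!
    # block until the target is listening on /var/tmp/spdk.sock
    waitforlisten "$spdk_pid"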
00:18:35.072 19:40:25 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.072 19:40:25 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:35.072 19:40:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.332 [2024-07-15 19:40:25.955841] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:18:35.332 [2024-07-15 19:40:25.955983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78894 ] 00:18:35.591 [2024-07-15 19:40:26.129362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:35.849 [2024-07-15 19:40:26.461054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.849 [2024-07-15 19:40:26.461084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.784 19:40:27 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:36.784 19:40:27 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:18:36.784 19:40:27 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:18:36.784 19:40:27 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.784 19:40:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.784 [2024-07-15 19:40:27.534846] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:36.784 [2024-07-15 19:40:27.538702] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:36.784 19:40:27 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.784 19:40:27 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:36.784 19:40:27 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.784 19:40:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.042 malloc0 00:18:37.042 19:40:27 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.042 19:40:27 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:18:37.042 19:40:27 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.043 19:40:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.043 [2024-07-15 19:40:27.735019] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:18:37.043 [2024-07-15 19:40:27.735164] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:18:37.043 [2024-07-15 19:40:27.735179] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:37.043 [2024-07-15 19:40:27.735191] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:37.043 [2024-07-15 19:40:27.747075] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:37.043 [2024-07-15 19:40:27.747130] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:37.043 [2024-07-15 19:40:27.757818] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:37.043 [2024-07-15 19:40:27.758020] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:37.043 [2024-07-15 19:40:27.772861] ublk.c: 
328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:37.043 1 00:18:37.043 19:40:27 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.043 19:40:27 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:18:38.435 19:40:28 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=78935 00:18:38.435 19:40:28 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:18:38.435 19:40:28 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:18:38.435 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:38.435 fio-3.35 00:18:38.435 Starting 1 process 00:18:43.811 19:40:33 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 78894 00:18:43.811 19:40:33 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:18:49.079 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 78894 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:18:49.079 19:40:38 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=79045 00:18:49.079 19:40:38 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:49.079 19:40:38 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:49.079 19:40:38 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 79045 00:18:49.079 19:40:38 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 79045 ']' 00:18:49.079 19:40:38 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.079 19:40:38 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:49.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.079 19:40:38 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.079 19:40:38 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:49.079 19:40:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.079 [2024-07-15 19:40:38.944680] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
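At this point the test has a live ublk block device (/dev/ublkb1 backed by malloc0) and injects the failure it is named after: fio keeps issuing I/O to the kernel device while the SPDK target that services it is killed outright and a second target is started in its place. Condensed from the trace as a sketch (the pids are whatever the shell assigns, 78894 and 79045 in this run):

    # background fio workload against the kernel ublk device
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &
    fio_proc=$!
    sleep 5
    # crash the backend: SIGKILL gives it no chance to clean up
    kill -9 "$spdk_pid"
    sleep 5
    # bring up a replacement target; the kernel ublk device survives
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!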
00:18:49.079 [2024-07-15 19:40:38.944845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79045 ] 00:18:49.079 [2024-07-15 19:40:39.110175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:49.079 [2024-07-15 19:40:39.386756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.079 [2024-07-15 19:40:39.386815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.012 19:40:40 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:50.012 19:40:40 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:18:50.012 19:40:40 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:18:50.012 19:40:40 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.012 19:40:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.012 [2024-07-15 19:40:40.537393] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:50.012 [2024-07-15 19:40:40.544961] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:50.012 19:40:40 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.012 19:40:40 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:50.012 19:40:40 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.012 19:40:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.012 malloc0 00:18:50.012 19:40:40 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.012 19:40:40 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:18:50.012 19:40:40 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.012 19:40:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.012 [2024-07-15 19:40:40.753029] ublk.c:2095:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:18:50.012 [2024-07-15 19:40:40.753093] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:50.012 [2024-07-15 19:40:40.753108] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:50.012 [2024-07-15 19:40:40.760911] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:50.012 [2024-07-15 19:40:40.760949] ublk.c:2024:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:18:50.012 1 00:18:50.012 [2024-07-15 19:40:40.761073] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:18:50.012 19:40:40 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.012 19:40:40 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 78935 00:18:50.012 [2024-07-15 19:40:40.768842] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:18:50.012 [2024-07-15 19:40:40.776057] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:18:50.012 [2024-07-15 19:40:40.784041] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:18:50.012 [2024-07-15 19:40:40.784078] ublk.c: 378:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:19:46.227 00:19:46.227 
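On the replacement target the disk is not recreated from scratch; it is re-attached to the still-existing kernel device through the user-recovery path (GET_DEV_INFO, START_USER_RECOVERY, END_USER_RECOVERY in the control-command trace above). The RPC side of that, condensed from this run (rpc_cmd is the harness wrapper around scripts/rpc.py):

    # recreate the ublk target and the backing bdev inside the new process
    rpc_cmd ublk_create_target
    rpc_cmd bdev_malloc_create -b malloc0 64 4096
    # re-attach bdev malloc0 to the surviving kernel ublk device 1
    rpc_cmd ublk_recover_disk malloc0 1
    # let the background fio job run out its 60 s before checking results
    wait "$fio_proc"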
fio_test: (groupid=0, jobs=1): err= 0: pid=78938: Mon Jul 15 19:41:29 2024 00:19:46.227 read: IOPS=19.7k, BW=77.0MiB/s (80.7MB/s)(4619MiB/60001msec) 00:19:46.227 slat (nsec): min=1949, max=344874, avg=6823.07, stdev=2368.60 00:19:46.227 clat (usec): min=928, max=6997.6k, avg=3165.36, stdev=49811.19 00:19:46.227 lat (usec): min=934, max=6997.6k, avg=3172.18, stdev=49811.19 00:19:46.227 clat percentiles (usec): 00:19:46.227 | 1.00th=[ 2147], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2409], 00:19:46.227 | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2671], 60.00th=[ 2737], 00:19:46.227 | 70.00th=[ 2835], 80.00th=[ 2999], 90.00th=[ 3359], 95.00th=[ 4047], 00:19:46.227 | 99.00th=[ 5538], 99.50th=[ 6259], 99.90th=[ 8160], 99.95th=[ 8848], 00:19:46.227 | 99.99th=[13042] 00:19:46.227 bw ( KiB/s): min=19912, max=103144, per=100.00%, avg=88473.87, stdev=12151.27, samples=106 00:19:46.227 iops : min= 4978, max=25786, avg=22118.44, stdev=3037.83, samples=106 00:19:46.227 write: IOPS=19.7k, BW=76.9MiB/s (80.7MB/s)(4617MiB/60001msec); 0 zone resets 00:19:46.227 slat (usec): min=2, max=1713, avg= 6.84, stdev= 2.85 00:19:46.227 clat (usec): min=936, max=6997.6k, avg=3316.26, stdev=53041.31 00:19:46.227 lat (usec): min=943, max=6997.6k, avg=3323.09, stdev=53041.30 00:19:46.227 clat percentiles (usec): 00:19:46.227 | 1.00th=[ 2180], 5.00th=[ 2409], 10.00th=[ 2442], 20.00th=[ 2540], 00:19:46.227 | 30.00th=[ 2606], 40.00th=[ 2704], 50.00th=[ 2769], 60.00th=[ 2868], 00:19:46.227 | 70.00th=[ 2933], 80.00th=[ 3097], 90.00th=[ 3458], 95.00th=[ 4015], 00:19:46.227 | 99.00th=[ 5473], 99.50th=[ 6325], 99.90th=[ 8094], 99.95th=[ 8979], 00:19:46.227 | 99.99th=[13173] 00:19:46.227 bw ( KiB/s): min=19760, max=101880, per=100.00%, avg=88422.07, stdev=12048.29, samples=106 00:19:46.227 iops : min= 4940, max=25470, avg=22105.50, stdev=3012.08, samples=106 00:19:46.227 lat (usec) : 1000=0.01% 00:19:46.227 lat (msec) : 2=0.39%, 4=94.48%, 10=5.11%, 20=0.02%, >=2000=0.01% 00:19:46.227 cpu : usr=10.22%, sys=25.84%, ctx=83502, majf=0, minf=13 00:19:46.227 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:46.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:46.227 issued rwts: total=1182492,1181888,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.227 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:46.227 00:19:46.227 Run status group 0 (all jobs): 00:19:46.227 READ: bw=77.0MiB/s (80.7MB/s), 77.0MiB/s-77.0MiB/s (80.7MB/s-80.7MB/s), io=4619MiB (4843MB), run=60001-60001msec 00:19:46.227 WRITE: bw=76.9MiB/s (80.7MB/s), 76.9MiB/s-76.9MiB/s (80.7MB/s-80.7MB/s), io=4617MiB (4841MB), run=60001-60001msec 00:19:46.227 00:19:46.227 Disk stats (read/write): 00:19:46.227 ublkb1: ios=1179824/1179145, merge=0/0, ticks=3644308/3686185, in_queue=7330494, util=99.93% 00:19:46.227 19:41:29 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:19:46.227 19:41:29 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.227 19:41:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:46.227 [2024-07-15 19:41:29.061581] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:46.227 [2024-07-15 19:41:29.089019] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:46.227 [2024-07-15 19:41:29.089356] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 
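The fio summary above is the pass/fail signal for the recovery: err=0, with roughly 1.18 million reads and 1.18 million writes issued over the 60 s run (1182492 / 60 ≈ 19.7k IOPS per direction, which at the 4 KiB block size matches the reported ~77 MiB/s), and ublkb1 stayed ~99.9% utilized, so the kill/recover cycle did not stall the device.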
00:19:46.227 [2024-07-15 19:41:29.096885] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:46.227 [2024-07-15 19:41:29.097051] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:46.227 [2024-07-15 19:41:29.097065] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:46.227 19:41:29 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.227 19:41:29 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:19:46.227 19:41:29 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.227 19:41:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:46.227 [2024-07-15 19:41:29.115934] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:19:46.227 [2024-07-15 19:41:29.122833] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:19:46.227 [2024-07-15 19:41:29.122880] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:46.227 19:41:29 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.227 19:41:29 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:19:46.227 19:41:29 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:19:46.227 19:41:29 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 79045 00:19:46.227 19:41:29 ublk_recovery -- common/autotest_common.sh@948 -- # '[' -z 79045 ']' 00:19:46.227 19:41:29 ublk_recovery -- common/autotest_common.sh@952 -- # kill -0 79045 00:19:46.227 19:41:29 ublk_recovery -- common/autotest_common.sh@953 -- # uname 00:19:46.227 19:41:29 ublk_recovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:46.227 19:41:29 ublk_recovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79045 00:19:46.227 19:41:29 ublk_recovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:46.227 killing process with pid 79045 00:19:46.227 19:41:29 ublk_recovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:46.227 19:41:29 ublk_recovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79045' 00:19:46.227 19:41:29 ublk_recovery -- common/autotest_common.sh@967 -- # kill 79045 00:19:46.227 19:41:29 ublk_recovery -- common/autotest_common.sh@972 -- # wait 79045 00:19:46.227 [2024-07-15 19:41:30.412727] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:19:46.227 [2024-07-15 19:41:30.412803] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:19:46.227 00:19:46.227 real 1m6.415s 00:19:46.227 user 1m50.938s 00:19:46.227 sys 0m32.181s 00:19:46.227 19:41:32 ublk_recovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:46.227 19:41:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:46.227 ************************************ 00:19:46.227 END TEST ublk_recovery 00:19:46.227 ************************************ 00:19:46.227 19:41:32 -- common/autotest_common.sh@1142 -- # return 0 00:19:46.227 19:41:32 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:46.227 19:41:32 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:46.227 19:41:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:46.227 19:41:32 -- common/autotest_common.sh@10 -- # set +x 00:19:46.227 19:41:32 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:46.227 19:41:32 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:19:46.227 19:41:32 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:19:46.227 19:41:32 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:19:46.227 19:41:32 -- spdk/autotest.sh@312 -- # 
'[' 0 -eq 1 ']' 00:19:46.227 19:41:32 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:19:46.227 19:41:32 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:19:46.227 19:41:32 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:19:46.227 19:41:32 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:19:46.227 19:41:32 -- spdk/autotest.sh@339 -- # '[' 1 -eq 1 ']' 00:19:46.227 19:41:32 -- spdk/autotest.sh@340 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:46.227 19:41:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:46.227 19:41:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:46.227 19:41:32 -- common/autotest_common.sh@10 -- # set +x 00:19:46.227 ************************************ 00:19:46.227 START TEST ftl 00:19:46.227 ************************************ 00:19:46.227 19:41:32 ftl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:46.227 * Looking for test storage... 00:19:46.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:46.227 19:41:32 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:46.227 19:41:32 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:46.227 19:41:32 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:46.227 19:41:32 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:46.227 19:41:32 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:46.227 19:41:32 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:46.227 19:41:32 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:46.227 19:41:32 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:46.227 19:41:32 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:46.227 19:41:32 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:46.227 19:41:32 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:46.227 19:41:32 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:46.227 19:41:32 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:46.227 19:41:32 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:46.227 19:41:32 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:46.227 19:41:32 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:46.227 19:41:32 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:46.227 19:41:32 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:46.227 19:41:32 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:46.227 19:41:32 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:46.227 19:41:32 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:46.227 19:41:32 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:46.227 19:41:32 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:46.227 19:41:32 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:46.227 19:41:32 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:46.227 19:41:32 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:46.227 
19:41:32 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:46.227 19:41:32 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:46.227 19:41:32 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:46.227 19:41:32 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:46.227 19:41:32 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:19:46.227 19:41:32 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:19:46.227 19:41:32 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:19:46.227 19:41:32 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:19:46.227 19:41:32 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:46.228 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:46.228 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:46.228 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:46.228 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:46.228 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:46.228 19:41:33 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=79833 00:19:46.228 19:41:33 ftl -- ftl/ftl.sh@38 -- # waitforlisten 79833 00:19:46.228 19:41:33 ftl -- common/autotest_common.sh@829 -- # '[' -z 79833 ']' 00:19:46.228 19:41:33 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.228 19:41:33 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:46.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.228 19:41:33 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.228 19:41:33 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:19:46.228 19:41:33 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:46.228 19:41:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:46.228 [2024-07-15 19:41:33.188162] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
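For the FTL suite the target is started with --wait-for-rpc, which holds framework initialization until the test has adjusted bdev options over RPC; the NVMe attach configuration produced by gen_nvme.sh is then loaded explicitly. Roughly, following the calls traced just below (flags exactly as they appear in this log):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"
    # set bdev options before init (-d is the flag ftl.sh passes here)
    scripts/rpc.py bdev_set_options -d
    scripts/rpc.py framework_start_init
    # feed the generated NVMe config to the running target via a /dev/fd path
    scripts/rpc.py load_subsystem_config -j <(scripts/gen_nvme.sh)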
00:19:46.228 [2024-07-15 19:41:33.188367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79833 ] 00:19:46.228 [2024-07-15 19:41:33.381954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.228 [2024-07-15 19:41:33.719850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.228 19:41:34 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:46.228 19:41:34 ftl -- common/autotest_common.sh@862 -- # return 0 00:19:46.228 19:41:34 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:19:46.228 19:41:34 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:46.228 19:41:35 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:19:46.228 19:41:35 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:46.228 19:41:36 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:19:46.228 19:41:36 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:46.228 19:41:36 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:46.228 19:41:36 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:19:46.228 19:41:36 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:19:46.228 19:41:36 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:19:46.228 19:41:36 ftl -- ftl/ftl.sh@50 -- # break 00:19:46.228 19:41:36 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:19:46.228 19:41:36 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:19:46.228 19:41:36 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:46.228 19:41:36 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:46.228 19:41:36 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:19:46.228 19:41:36 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:19:46.228 19:41:36 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:19:46.228 19:41:36 ftl -- ftl/ftl.sh@63 -- # break 00:19:46.228 19:41:36 ftl -- ftl/ftl.sh@66 -- # killprocess 79833 00:19:46.228 19:41:36 ftl -- common/autotest_common.sh@948 -- # '[' -z 79833 ']' 00:19:46.228 19:41:36 ftl -- common/autotest_common.sh@952 -- # kill -0 79833 00:19:46.228 19:41:36 ftl -- common/autotest_common.sh@953 -- # uname 00:19:46.228 19:41:36 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:46.228 19:41:36 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79833 00:19:46.228 19:41:36 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:46.228 killing process with pid 79833 00:19:46.228 19:41:36 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:46.228 19:41:36 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79833' 00:19:46.228 19:41:36 ftl -- common/autotest_common.sh@967 -- # kill 79833 00:19:46.228 19:41:36 ftl -- common/autotest_common.sh@972 -- # wait 79833 00:19:49.513 19:41:39 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:19:49.513 19:41:39 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:49.513 19:41:39 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:49.513 19:41:39 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:49.513 19:41:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:49.513 ************************************ 00:19:49.513 START TEST ftl_fio_basic 00:19:49.513 ************************************ 00:19:49.513 19:41:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:49.513 * Looking for test storage... 00:19:49.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=79985 00:19:49.513 19:41:40 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 79985 00:19:49.514 19:41:40 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:19:49.514 19:41:40 ftl.ftl_fio_basic -- common/autotest_common.sh@829 -- # '[' -z 79985 ']' 00:19:49.514 19:41:40 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.514 19:41:40 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.514 19:41:40 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.514 19:41:40 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.514 19:41:40 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:49.514 [2024-07-15 19:41:40.171140] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
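ftl_fio_basic builds its device stack from the two PCI addresses handed to fio.sh: the drive at 0000:00:11.0 becomes the base for a thin-provisioned logical volume, the drive at 0000:00:10.0 is split to provide the non-volatile write-buffer cache, and the two are tied together as the FTL bdev ftl0. Condensed from the RPCs traced below (UUID placeholders stand in for the values printed in this log):

    # base side: NVMe controller -> lvstore -> thin-provisioned lvol
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>
    # cache side: second NVMe controller, split to get a 5171 MiB partition
    scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
    # create the FTL bdev the fio jobs will run against
    scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> \
        -c nvc0n1p0 --l2p_dram_limit 60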
00:19:49.514 [2024-07-15 19:41:40.171324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79985 ] 00:19:49.771 [2024-07-15 19:41:40.361241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:50.030 [2024-07-15 19:41:40.673863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.030 [2024-07-15 19:41:40.673938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.030 [2024-07-15 19:41:40.673969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.965 19:41:41 ftl.ftl_fio_basic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:50.965 19:41:41 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # return 0 00:19:50.965 19:41:41 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:50.965 19:41:41 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:19:50.965 19:41:41 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:50.966 19:41:41 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:19:50.966 19:41:41 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:19:50.966 19:41:41 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:51.533 19:41:42 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:51.533 19:41:42 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:19:51.533 19:41:42 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:51.533 19:41:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:51.533 19:41:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:51.533 19:41:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:51.533 19:41:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:51.533 19:41:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:51.792 19:41:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:51.792 { 00:19:51.792 "name": "nvme0n1", 00:19:51.792 "aliases": [ 00:19:51.792 "5dd034d2-31f8-4e77-8938-23d2a4394ea8" 00:19:51.792 ], 00:19:51.792 "product_name": "NVMe disk", 00:19:51.792 "block_size": 4096, 00:19:51.792 "num_blocks": 1310720, 00:19:51.792 "uuid": "5dd034d2-31f8-4e77-8938-23d2a4394ea8", 00:19:51.792 "assigned_rate_limits": { 00:19:51.792 "rw_ios_per_sec": 0, 00:19:51.792 "rw_mbytes_per_sec": 0, 00:19:51.792 "r_mbytes_per_sec": 0, 00:19:51.792 "w_mbytes_per_sec": 0 00:19:51.792 }, 00:19:51.792 "claimed": false, 00:19:51.792 "zoned": false, 00:19:51.792 "supported_io_types": { 00:19:51.792 "read": true, 00:19:51.792 "write": true, 00:19:51.792 "unmap": true, 00:19:51.792 "flush": true, 00:19:51.792 "reset": true, 00:19:51.792 "nvme_admin": true, 00:19:51.792 "nvme_io": true, 00:19:51.792 "nvme_io_md": false, 00:19:51.792 "write_zeroes": true, 00:19:51.792 "zcopy": false, 00:19:51.792 "get_zone_info": false, 00:19:51.792 "zone_management": false, 00:19:51.792 "zone_append": false, 00:19:51.792 "compare": true, 00:19:51.792 "compare_and_write": false, 00:19:51.792 "abort": true, 00:19:51.792 "seek_hole": false, 00:19:51.792 
"seek_data": false, 00:19:51.792 "copy": true, 00:19:51.792 "nvme_iov_md": false 00:19:51.792 }, 00:19:51.792 "driver_specific": { 00:19:51.792 "nvme": [ 00:19:51.792 { 00:19:51.792 "pci_address": "0000:00:11.0", 00:19:51.792 "trid": { 00:19:51.792 "trtype": "PCIe", 00:19:51.792 "traddr": "0000:00:11.0" 00:19:51.792 }, 00:19:51.792 "ctrlr_data": { 00:19:51.792 "cntlid": 0, 00:19:51.792 "vendor_id": "0x1b36", 00:19:51.792 "model_number": "QEMU NVMe Ctrl", 00:19:51.792 "serial_number": "12341", 00:19:51.792 "firmware_revision": "8.0.0", 00:19:51.792 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:51.792 "oacs": { 00:19:51.792 "security": 0, 00:19:51.792 "format": 1, 00:19:51.792 "firmware": 0, 00:19:51.792 "ns_manage": 1 00:19:51.792 }, 00:19:51.792 "multi_ctrlr": false, 00:19:51.792 "ana_reporting": false 00:19:51.792 }, 00:19:51.792 "vs": { 00:19:51.792 "nvme_version": "1.4" 00:19:51.792 }, 00:19:51.792 "ns_data": { 00:19:51.792 "id": 1, 00:19:51.792 "can_share": false 00:19:51.792 } 00:19:51.792 } 00:19:51.792 ], 00:19:51.792 "mp_policy": "active_passive" 00:19:51.792 } 00:19:51.792 } 00:19:51.792 ]' 00:19:51.792 19:41:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:51.792 19:41:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:51.792 19:41:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:51.792 19:41:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:51.792 19:41:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:51.792 19:41:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:19:51.792 19:41:42 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:19:51.792 19:41:42 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:51.792 19:41:42 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:19:51.792 19:41:42 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:51.792 19:41:42 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:52.051 19:41:42 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:19:52.051 19:41:42 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:52.310 19:41:42 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=96b0f1e2-b936-4e16-81a4-3af81b1e6e6a 00:19:52.310 19:41:42 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 96b0f1e2-b936-4e16-81a4-3af81b1e6e6a 00:19:52.569 19:41:43 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=0a01665f-ab5a-43e2-bb02-954ebfaa4ea4 00:19:52.569 19:41:43 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0a01665f-ab5a-43e2-bb02-954ebfaa4ea4 00:19:52.569 19:41:43 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:19:52.569 19:41:43 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:52.569 19:41:43 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=0a01665f-ab5a-43e2-bb02-954ebfaa4ea4 00:19:52.569 19:41:43 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:19:52.569 19:41:43 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 0a01665f-ab5a-43e2-bb02-954ebfaa4ea4 00:19:52.569 19:41:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=0a01665f-ab5a-43e2-bb02-954ebfaa4ea4 00:19:52.569 19:41:43 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:52.569 19:41:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:52.569 19:41:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:52.569 19:41:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a01665f-ab5a-43e2-bb02-954ebfaa4ea4 00:19:52.828 19:41:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:52.828 { 00:19:52.828 "name": "0a01665f-ab5a-43e2-bb02-954ebfaa4ea4", 00:19:52.828 "aliases": [ 00:19:52.828 "lvs/nvme0n1p0" 00:19:52.828 ], 00:19:52.828 "product_name": "Logical Volume", 00:19:52.828 "block_size": 4096, 00:19:52.828 "num_blocks": 26476544, 00:19:52.828 "uuid": "0a01665f-ab5a-43e2-bb02-954ebfaa4ea4", 00:19:52.828 "assigned_rate_limits": { 00:19:52.828 "rw_ios_per_sec": 0, 00:19:52.828 "rw_mbytes_per_sec": 0, 00:19:52.828 "r_mbytes_per_sec": 0, 00:19:52.828 "w_mbytes_per_sec": 0 00:19:52.828 }, 00:19:52.828 "claimed": false, 00:19:52.828 "zoned": false, 00:19:52.828 "supported_io_types": { 00:19:52.828 "read": true, 00:19:52.828 "write": true, 00:19:52.828 "unmap": true, 00:19:52.828 "flush": false, 00:19:52.828 "reset": true, 00:19:52.828 "nvme_admin": false, 00:19:52.828 "nvme_io": false, 00:19:52.828 "nvme_io_md": false, 00:19:52.828 "write_zeroes": true, 00:19:52.828 "zcopy": false, 00:19:52.828 "get_zone_info": false, 00:19:52.828 "zone_management": false, 00:19:52.828 "zone_append": false, 00:19:52.828 "compare": false, 00:19:52.828 "compare_and_write": false, 00:19:52.828 "abort": false, 00:19:52.828 "seek_hole": true, 00:19:52.828 "seek_data": true, 00:19:52.828 "copy": false, 00:19:52.828 "nvme_iov_md": false 00:19:52.828 }, 00:19:52.828 "driver_specific": { 00:19:52.828 "lvol": { 00:19:52.828 "lvol_store_uuid": "96b0f1e2-b936-4e16-81a4-3af81b1e6e6a", 00:19:52.828 "base_bdev": "nvme0n1", 00:19:52.828 "thin_provision": true, 00:19:52.828 "num_allocated_clusters": 0, 00:19:52.828 "snapshot": false, 00:19:52.828 "clone": false, 00:19:52.828 "esnap_clone": false 00:19:52.828 } 00:19:52.828 } 00:19:52.828 } 00:19:52.828 ]' 00:19:52.828 19:41:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:52.828 19:41:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:52.828 19:41:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:53.086 19:41:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:53.086 19:41:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:53.086 19:41:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:19:53.086 19:41:43 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:19:53.086 19:41:43 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:19:53.086 19:41:43 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:53.378 19:41:43 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:53.378 19:41:43 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:53.378 19:41:44 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 0a01665f-ab5a-43e2-bb02-954ebfaa4ea4 00:19:53.378 19:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=0a01665f-ab5a-43e2-bb02-954ebfaa4ea4 00:19:53.378 19:41:44 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:19:53.378 19:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:53.378 19:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:53.378 19:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a01665f-ab5a-43e2-bb02-954ebfaa4ea4 00:19:53.638 19:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:53.638 { 00:19:53.638 "name": "0a01665f-ab5a-43e2-bb02-954ebfaa4ea4", 00:19:53.638 "aliases": [ 00:19:53.638 "lvs/nvme0n1p0" 00:19:53.638 ], 00:19:53.638 "product_name": "Logical Volume", 00:19:53.638 "block_size": 4096, 00:19:53.638 "num_blocks": 26476544, 00:19:53.638 "uuid": "0a01665f-ab5a-43e2-bb02-954ebfaa4ea4", 00:19:53.638 "assigned_rate_limits": { 00:19:53.638 "rw_ios_per_sec": 0, 00:19:53.638 "rw_mbytes_per_sec": 0, 00:19:53.638 "r_mbytes_per_sec": 0, 00:19:53.638 "w_mbytes_per_sec": 0 00:19:53.638 }, 00:19:53.638 "claimed": false, 00:19:53.638 "zoned": false, 00:19:53.638 "supported_io_types": { 00:19:53.638 "read": true, 00:19:53.638 "write": true, 00:19:53.638 "unmap": true, 00:19:53.638 "flush": false, 00:19:53.638 "reset": true, 00:19:53.638 "nvme_admin": false, 00:19:53.638 "nvme_io": false, 00:19:53.638 "nvme_io_md": false, 00:19:53.638 "write_zeroes": true, 00:19:53.638 "zcopy": false, 00:19:53.638 "get_zone_info": false, 00:19:53.638 "zone_management": false, 00:19:53.638 "zone_append": false, 00:19:53.638 "compare": false, 00:19:53.638 "compare_and_write": false, 00:19:53.638 "abort": false, 00:19:53.638 "seek_hole": true, 00:19:53.638 "seek_data": true, 00:19:53.638 "copy": false, 00:19:53.638 "nvme_iov_md": false 00:19:53.638 }, 00:19:53.638 "driver_specific": { 00:19:53.638 "lvol": { 00:19:53.638 "lvol_store_uuid": "96b0f1e2-b936-4e16-81a4-3af81b1e6e6a", 00:19:53.638 "base_bdev": "nvme0n1", 00:19:53.638 "thin_provision": true, 00:19:53.638 "num_allocated_clusters": 0, 00:19:53.638 "snapshot": false, 00:19:53.638 "clone": false, 00:19:53.638 "esnap_clone": false 00:19:53.638 } 00:19:53.638 } 00:19:53.638 } 00:19:53.638 ]' 00:19:53.638 19:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:53.638 19:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:53.638 19:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:53.638 19:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:53.638 19:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:53.638 19:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:19:53.638 19:41:44 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:19:53.638 19:41:44 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:53.896 19:41:44 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:19:53.896 19:41:44 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:19:53.896 19:41:44 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:19:53.896 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:19:53.896 19:41:44 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 0a01665f-ab5a-43e2-bb02-954ebfaa4ea4 00:19:53.896 19:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=0a01665f-ab5a-43e2-bb02-954ebfaa4ea4 
00:19:53.896 19:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:53.896 19:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:53.896 19:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:53.896 19:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a01665f-ab5a-43e2-bb02-954ebfaa4ea4 00:19:54.154 19:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:54.154 { 00:19:54.154 "name": "0a01665f-ab5a-43e2-bb02-954ebfaa4ea4", 00:19:54.154 "aliases": [ 00:19:54.154 "lvs/nvme0n1p0" 00:19:54.154 ], 00:19:54.154 "product_name": "Logical Volume", 00:19:54.154 "block_size": 4096, 00:19:54.154 "num_blocks": 26476544, 00:19:54.154 "uuid": "0a01665f-ab5a-43e2-bb02-954ebfaa4ea4", 00:19:54.154 "assigned_rate_limits": { 00:19:54.154 "rw_ios_per_sec": 0, 00:19:54.154 "rw_mbytes_per_sec": 0, 00:19:54.154 "r_mbytes_per_sec": 0, 00:19:54.154 "w_mbytes_per_sec": 0 00:19:54.154 }, 00:19:54.154 "claimed": false, 00:19:54.154 "zoned": false, 00:19:54.154 "supported_io_types": { 00:19:54.154 "read": true, 00:19:54.154 "write": true, 00:19:54.154 "unmap": true, 00:19:54.154 "flush": false, 00:19:54.154 "reset": true, 00:19:54.154 "nvme_admin": false, 00:19:54.154 "nvme_io": false, 00:19:54.154 "nvme_io_md": false, 00:19:54.154 "write_zeroes": true, 00:19:54.154 "zcopy": false, 00:19:54.154 "get_zone_info": false, 00:19:54.154 "zone_management": false, 00:19:54.154 "zone_append": false, 00:19:54.154 "compare": false, 00:19:54.154 "compare_and_write": false, 00:19:54.154 "abort": false, 00:19:54.154 "seek_hole": true, 00:19:54.154 "seek_data": true, 00:19:54.154 "copy": false, 00:19:54.154 "nvme_iov_md": false 00:19:54.154 }, 00:19:54.154 "driver_specific": { 00:19:54.154 "lvol": { 00:19:54.154 "lvol_store_uuid": "96b0f1e2-b936-4e16-81a4-3af81b1e6e6a", 00:19:54.154 "base_bdev": "nvme0n1", 00:19:54.154 "thin_provision": true, 00:19:54.154 "num_allocated_clusters": 0, 00:19:54.154 "snapshot": false, 00:19:54.154 "clone": false, 00:19:54.154 "esnap_clone": false 00:19:54.154 } 00:19:54.154 } 00:19:54.154 } 00:19:54.154 ]' 00:19:54.154 19:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:54.154 19:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:54.154 19:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:54.154 19:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:54.154 19:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:54.154 19:41:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:19:54.154 19:41:44 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:19:54.154 19:41:44 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:19:54.154 19:41:44 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0a01665f-ab5a-43e2-bb02-954ebfaa4ea4 -c nvc0n1p0 --l2p_dram_limit 60 00:19:54.414 [2024-07-15 19:41:45.010426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.414 [2024-07-15 19:41:45.010489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:54.414 [2024-07-15 19:41:45.010506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:54.414 [2024-07-15 19:41:45.010521] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.414 [2024-07-15 19:41:45.010599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.414 [2024-07-15 19:41:45.010615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:54.414 [2024-07-15 19:41:45.010627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:54.414 [2024-07-15 19:41:45.010640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.414 [2024-07-15 19:41:45.010672] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:54.414 [2024-07-15 19:41:45.011878] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:54.414 [2024-07-15 19:41:45.011914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.414 [2024-07-15 19:41:45.011933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:54.414 [2024-07-15 19:41:45.011946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.248 ms 00:19:54.414 [2024-07-15 19:41:45.011959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.414 [2024-07-15 19:41:45.012075] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3e3c4c6e-20be-4de4-bd4b-5e6a378a1c48 00:19:54.414 [2024-07-15 19:41:45.013553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.414 [2024-07-15 19:41:45.013588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:54.414 [2024-07-15 19:41:45.013604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:54.414 [2024-07-15 19:41:45.013616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.414 [2024-07-15 19:41:45.021365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.414 [2024-07-15 19:41:45.021401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:54.414 [2024-07-15 19:41:45.021421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.667 ms 00:19:54.414 [2024-07-15 19:41:45.021433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.414 [2024-07-15 19:41:45.021564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.414 [2024-07-15 19:41:45.021581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:54.414 [2024-07-15 19:41:45.021595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:19:54.414 [2024-07-15 19:41:45.021606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.414 [2024-07-15 19:41:45.021689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.414 [2024-07-15 19:41:45.021702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:54.414 [2024-07-15 19:41:45.021716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:54.414 [2024-07-15 19:41:45.021729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.414 [2024-07-15 19:41:45.021767] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:54.414 [2024-07-15 19:41:45.027675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.414 [2024-07-15 19:41:45.027739] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:54.414 [2024-07-15 19:41:45.027753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.919 ms 00:19:54.414 [2024-07-15 19:41:45.027766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.414 [2024-07-15 19:41:45.027836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.414 [2024-07-15 19:41:45.027852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:54.414 [2024-07-15 19:41:45.027864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:54.414 [2024-07-15 19:41:45.027877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.414 [2024-07-15 19:41:45.027946] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:54.414 [2024-07-15 19:41:45.028116] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:54.414 [2024-07-15 19:41:45.028139] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:54.414 [2024-07-15 19:41:45.028160] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:54.414 [2024-07-15 19:41:45.028175] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:54.414 [2024-07-15 19:41:45.028191] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:54.414 [2024-07-15 19:41:45.028203] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:54.414 [2024-07-15 19:41:45.028218] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:54.414 [2024-07-15 19:41:45.028229] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:54.414 [2024-07-15 19:41:45.028245] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:54.414 [2024-07-15 19:41:45.028256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.414 [2024-07-15 19:41:45.028270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:54.414 [2024-07-15 19:41:45.028281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:19:54.414 [2024-07-15 19:41:45.028295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.415 [2024-07-15 19:41:45.028390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.415 [2024-07-15 19:41:45.028415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:54.415 [2024-07-15 19:41:45.028426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:54.415 [2024-07-15 19:41:45.028438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.415 [2024-07-15 19:41:45.028560] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:54.415 [2024-07-15 19:41:45.028584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:54.415 [2024-07-15 19:41:45.028596] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:54.415 [2024-07-15 19:41:45.028610] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.415 [2024-07-15 19:41:45.028621] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:54.415 [2024-07-15 
19:41:45.028634] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:54.415 [2024-07-15 19:41:45.028644] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:54.415 [2024-07-15 19:41:45.028657] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:54.415 [2024-07-15 19:41:45.028667] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:54.415 [2024-07-15 19:41:45.028679] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:54.415 [2024-07-15 19:41:45.028690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:54.415 [2024-07-15 19:41:45.028702] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:54.415 [2024-07-15 19:41:45.028712] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:54.415 [2024-07-15 19:41:45.028727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:54.415 [2024-07-15 19:41:45.028737] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:54.415 [2024-07-15 19:41:45.028749] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.415 [2024-07-15 19:41:45.028759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:54.415 [2024-07-15 19:41:45.028774] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:54.415 [2024-07-15 19:41:45.028799] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.415 [2024-07-15 19:41:45.028812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:54.415 [2024-07-15 19:41:45.028822] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:54.415 [2024-07-15 19:41:45.028834] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.415 [2024-07-15 19:41:45.028844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:54.415 [2024-07-15 19:41:45.028856] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:54.415 [2024-07-15 19:41:45.028866] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.415 [2024-07-15 19:41:45.028879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:54.415 [2024-07-15 19:41:45.028888] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:54.415 [2024-07-15 19:41:45.028900] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.415 [2024-07-15 19:41:45.028910] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:54.415 [2024-07-15 19:41:45.028923] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:54.415 [2024-07-15 19:41:45.028933] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.415 [2024-07-15 19:41:45.028945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:54.415 [2024-07-15 19:41:45.028955] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:54.415 [2024-07-15 19:41:45.028970] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:54.415 [2024-07-15 19:41:45.028980] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:54.415 [2024-07-15 19:41:45.028995] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:54.415 [2024-07-15 19:41:45.029005] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:19:54.415 [2024-07-15 19:41:45.029017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:54.415 [2024-07-15 19:41:45.029027] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:54.415 [2024-07-15 19:41:45.029042] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.415 [2024-07-15 19:41:45.029052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:54.415 [2024-07-15 19:41:45.029064] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:54.415 [2024-07-15 19:41:45.029074] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.415 [2024-07-15 19:41:45.029086] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:54.415 [2024-07-15 19:41:45.029097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:54.415 [2024-07-15 19:41:45.029128] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:54.415 [2024-07-15 19:41:45.029138] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.415 [2024-07-15 19:41:45.029152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:54.415 [2024-07-15 19:41:45.029162] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:54.415 [2024-07-15 19:41:45.029178] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:54.415 [2024-07-15 19:41:45.029188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:54.415 [2024-07-15 19:41:45.029208] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:54.415 [2024-07-15 19:41:45.029219] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:54.415 [2024-07-15 19:41:45.029235] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:54.415 [2024-07-15 19:41:45.029249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:54.415 [2024-07-15 19:41:45.029267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:54.415 [2024-07-15 19:41:45.029278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:54.415 [2024-07-15 19:41:45.029292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:54.415 [2024-07-15 19:41:45.029303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:54.415 [2024-07-15 19:41:45.029317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:54.415 [2024-07-15 19:41:45.029328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:54.415 [2024-07-15 19:41:45.029343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:54.415 [2024-07-15 19:41:45.029354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:54.415 [2024-07-15 
19:41:45.029368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:54.415 [2024-07-15 19:41:45.029379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:54.415 [2024-07-15 19:41:45.029395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:54.415 [2024-07-15 19:41:45.029407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:54.415 [2024-07-15 19:41:45.029423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:54.415 [2024-07-15 19:41:45.029434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:54.415 [2024-07-15 19:41:45.029448] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:54.415 [2024-07-15 19:41:45.029460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:54.415 [2024-07-15 19:41:45.029475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:54.415 [2024-07-15 19:41:45.029487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:54.415 [2024-07-15 19:41:45.029500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:54.415 [2024-07-15 19:41:45.029512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:54.415 [2024-07-15 19:41:45.029526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.415 [2024-07-15 19:41:45.029537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:54.415 [2024-07-15 19:41:45.029551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.036 ms 00:19:54.415 [2024-07-15 19:41:45.029562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.415 [2024-07-15 19:41:45.029642] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
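The layout dump above is internally consistent: 20971520 L2P entries at the reported 4-byte L2P address size come to 80 MiB, exactly the "Region l2p ... blocks: 80.00 MiB" entry, and at the 4096-byte block size those entries map 80 GiB of user data, the 20971520 blocks the ftl0 bdev reports once it is up. It is also why, with --l2p_dram_limit 60 on the bdev_ftl_create call, the later ftl_l2p_cache notice caps the resident portion at 59 (of 60) MiB rather than holding the full 80 MiB table. An illustrative check, not part of the captured run:

  echo $(( 20971520 * 4 / 1024 / 1024 ))             # L2P table size in MiB -> 80
  echo $(( 20971520 * 4096 / 1024 / 1024 / 1024 ))   # mapped user data in GiB -> 80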
00:19:54.415 [2024-07-15 19:41:45.029657] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:57.700 [2024-07-15 19:41:48.182653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.700 [2024-07-15 19:41:48.182722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:57.700 [2024-07-15 19:41:48.182742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3152.988 ms 00:19:57.700 [2024-07-15 19:41:48.182771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.700 [2024-07-15 19:41:48.230490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.700 [2024-07-15 19:41:48.230563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:57.700 [2024-07-15 19:41:48.230582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.396 ms 00:19:57.700 [2024-07-15 19:41:48.230594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.700 [2024-07-15 19:41:48.230766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.700 [2024-07-15 19:41:48.230788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:57.700 [2024-07-15 19:41:48.230803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:57.700 [2024-07-15 19:41:48.230813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.700 [2024-07-15 19:41:48.294406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.700 [2024-07-15 19:41:48.294493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:57.700 [2024-07-15 19:41:48.294520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.509 ms 00:19:57.700 [2024-07-15 19:41:48.294537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.700 [2024-07-15 19:41:48.294617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.700 [2024-07-15 19:41:48.294634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:57.700 [2024-07-15 19:41:48.294654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:57.700 [2024-07-15 19:41:48.294670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.700 [2024-07-15 19:41:48.295283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.700 [2024-07-15 19:41:48.295318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:57.700 [2024-07-15 19:41:48.295340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:19:57.700 [2024-07-15 19:41:48.295355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.700 [2024-07-15 19:41:48.295551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.700 [2024-07-15 19:41:48.295580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:57.700 [2024-07-15 19:41:48.295600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:19:57.700 [2024-07-15 19:41:48.295616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.700 [2024-07-15 19:41:48.322594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.700 [2024-07-15 19:41:48.322657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:57.700 [2024-07-15 
19:41:48.322676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.931 ms 00:19:57.700 [2024-07-15 19:41:48.322688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.700 [2024-07-15 19:41:48.337894] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:57.700 [2024-07-15 19:41:48.355620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.700 [2024-07-15 19:41:48.355699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:57.700 [2024-07-15 19:41:48.355715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.767 ms 00:19:57.700 [2024-07-15 19:41:48.355728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.700 [2024-07-15 19:41:48.424415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.700 [2024-07-15 19:41:48.424504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:57.700 [2024-07-15 19:41:48.424520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.616 ms 00:19:57.700 [2024-07-15 19:41:48.424534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.700 [2024-07-15 19:41:48.424792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.700 [2024-07-15 19:41:48.424809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:57.700 [2024-07-15 19:41:48.424821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:19:57.700 [2024-07-15 19:41:48.424837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.700 [2024-07-15 19:41:48.468105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.700 [2024-07-15 19:41:48.468183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:57.700 [2024-07-15 19:41:48.468200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.176 ms 00:19:57.700 [2024-07-15 19:41:48.468214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.958 [2024-07-15 19:41:48.509345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.958 [2024-07-15 19:41:48.509423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:57.958 [2024-07-15 19:41:48.509441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.061 ms 00:19:57.958 [2024-07-15 19:41:48.509455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.958 [2024-07-15 19:41:48.510411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.958 [2024-07-15 19:41:48.510454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:57.958 [2024-07-15 19:41:48.510468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.872 ms 00:19:57.958 [2024-07-15 19:41:48.510482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.958 [2024-07-15 19:41:48.623495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.958 [2024-07-15 19:41:48.623591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:57.958 [2024-07-15 19:41:48.623609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.919 ms 00:19:57.958 [2024-07-15 19:41:48.623644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.958 [2024-07-15 
19:41:48.670931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.958 [2024-07-15 19:41:48.671029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:57.958 [2024-07-15 19:41:48.671048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.209 ms 00:19:57.958 [2024-07-15 19:41:48.671063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.958 [2024-07-15 19:41:48.717153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.958 [2024-07-15 19:41:48.717226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:57.958 [2024-07-15 19:41:48.717242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.012 ms 00:19:57.958 [2024-07-15 19:41:48.717256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.216 [2024-07-15 19:41:48.758387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.216 [2024-07-15 19:41:48.758454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:58.216 [2024-07-15 19:41:48.758469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.064 ms 00:19:58.216 [2024-07-15 19:41:48.758483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.216 [2024-07-15 19:41:48.758555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.216 [2024-07-15 19:41:48.758575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:58.216 [2024-07-15 19:41:48.758587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:58.216 [2024-07-15 19:41:48.758603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.216 [2024-07-15 19:41:48.758733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.216 [2024-07-15 19:41:48.758752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:58.216 [2024-07-15 19:41:48.758764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:58.216 [2024-07-15 19:41:48.758793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.216 [2024-07-15 19:41:48.760051] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3749.027 ms, result 0 00:19:58.216 { 00:19:58.216 "name": "ftl0", 00:19:58.216 "uuid": "3e3c4c6e-20be-4de4-bd4b-5e6a378a1c48" 00:19:58.216 } 00:19:58.216 19:41:48 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:19:58.216 19:41:48 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:19:58.216 19:41:48 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:58.216 19:41:48 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local i 00:19:58.216 19:41:48 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:58.216 19:41:48 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:58.216 19:41:48 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:58.216 19:41:48 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:58.473 [ 00:19:58.473 { 00:19:58.473 "name": "ftl0", 00:19:58.473 "aliases": [ 00:19:58.473 "3e3c4c6e-20be-4de4-bd4b-5e6a378a1c48" 00:19:58.473 ], 00:19:58.473 "product_name": "FTL 
disk", 00:19:58.473 "block_size": 4096, 00:19:58.473 "num_blocks": 20971520, 00:19:58.473 "uuid": "3e3c4c6e-20be-4de4-bd4b-5e6a378a1c48", 00:19:58.473 "assigned_rate_limits": { 00:19:58.473 "rw_ios_per_sec": 0, 00:19:58.473 "rw_mbytes_per_sec": 0, 00:19:58.473 "r_mbytes_per_sec": 0, 00:19:58.473 "w_mbytes_per_sec": 0 00:19:58.473 }, 00:19:58.473 "claimed": false, 00:19:58.473 "zoned": false, 00:19:58.473 "supported_io_types": { 00:19:58.473 "read": true, 00:19:58.473 "write": true, 00:19:58.473 "unmap": true, 00:19:58.473 "flush": true, 00:19:58.473 "reset": false, 00:19:58.473 "nvme_admin": false, 00:19:58.473 "nvme_io": false, 00:19:58.473 "nvme_io_md": false, 00:19:58.473 "write_zeroes": true, 00:19:58.473 "zcopy": false, 00:19:58.473 "get_zone_info": false, 00:19:58.473 "zone_management": false, 00:19:58.473 "zone_append": false, 00:19:58.473 "compare": false, 00:19:58.473 "compare_and_write": false, 00:19:58.473 "abort": false, 00:19:58.473 "seek_hole": false, 00:19:58.473 "seek_data": false, 00:19:58.473 "copy": false, 00:19:58.473 "nvme_iov_md": false 00:19:58.473 }, 00:19:58.473 "driver_specific": { 00:19:58.473 "ftl": { 00:19:58.473 "base_bdev": "0a01665f-ab5a-43e2-bb02-954ebfaa4ea4", 00:19:58.473 "cache": "nvc0n1p0" 00:19:58.473 } 00:19:58.473 } 00:19:58.473 } 00:19:58.473 ] 00:19:58.473 19:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # return 0 00:19:58.473 19:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:19:58.473 19:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:58.730 19:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:19:58.730 19:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:58.989 [2024-07-15 19:41:49.568481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.989 [2024-07-15 19:41:49.568537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:58.989 [2024-07-15 19:41:49.568560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:58.989 [2024-07-15 19:41:49.568571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.989 [2024-07-15 19:41:49.568609] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:58.989 [2024-07-15 19:41:49.572517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.989 [2024-07-15 19:41:49.572559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:58.989 [2024-07-15 19:41:49.572573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.889 ms 00:19:58.989 [2024-07-15 19:41:49.572585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.989 [2024-07-15 19:41:49.573059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.989 [2024-07-15 19:41:49.573091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:58.989 [2024-07-15 19:41:49.573103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:19:58.989 [2024-07-15 19:41:49.573119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.989 [2024-07-15 19:41:49.575685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.989 [2024-07-15 19:41:49.575711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:58.989 
[2024-07-15 19:41:49.575723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.542 ms 00:19:58.989 [2024-07-15 19:41:49.575736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.989 [2024-07-15 19:41:49.580854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.989 [2024-07-15 19:41:49.580894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:58.989 [2024-07-15 19:41:49.580906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.091 ms 00:19:58.989 [2024-07-15 19:41:49.580918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.989 [2024-07-15 19:41:49.618683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.989 [2024-07-15 19:41:49.618742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:58.989 [2024-07-15 19:41:49.618757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.688 ms 00:19:58.989 [2024-07-15 19:41:49.618770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.989 [2024-07-15 19:41:49.642470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.989 [2024-07-15 19:41:49.642534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:58.989 [2024-07-15 19:41:49.642549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.635 ms 00:19:58.989 [2024-07-15 19:41:49.642562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.989 [2024-07-15 19:41:49.642798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.989 [2024-07-15 19:41:49.642816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:58.989 [2024-07-15 19:41:49.642828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:19:58.989 [2024-07-15 19:41:49.642840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.989 [2024-07-15 19:41:49.683429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.989 [2024-07-15 19:41:49.683481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:58.989 [2024-07-15 19:41:49.683495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.563 ms 00:19:58.989 [2024-07-15 19:41:49.683508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.989 [2024-07-15 19:41:49.728175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.989 [2024-07-15 19:41:49.728246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:58.989 [2024-07-15 19:41:49.728264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.611 ms 00:19:58.989 [2024-07-15 19:41:49.728279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.989 [2024-07-15 19:41:49.775541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.989 [2024-07-15 19:41:49.775612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:58.989 [2024-07-15 19:41:49.775630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.192 ms 00:19:58.989 [2024-07-15 19:41:49.775645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.249 [2024-07-15 19:41:49.825065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.249 [2024-07-15 19:41:49.825140] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:59.249 [2024-07-15 19:41:49.825158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.250 ms 00:19:59.249 [2024-07-15 19:41:49.825173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.249 [2024-07-15 19:41:49.825268] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:59.249 [2024-07-15 19:41:49.825299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 
[2024-07-15 19:41:49.825625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.825989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:19:59.249 [2024-07-15 19:41:49.826002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.826019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.826032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.826047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.826060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.826077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:59.249 [2024-07-15 19:41:49.826090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:59.250 [2024-07-15 19:41:49.826830] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:59.250 [2024-07-15 19:41:49.826842] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3e3c4c6e-20be-4de4-bd4b-5e6a378a1c48 00:19:59.250 [2024-07-15 19:41:49.826858] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:59.250 [2024-07-15 19:41:49.826874] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:59.250 [2024-07-15 19:41:49.826892] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:59.250 [2024-07-15 19:41:49.826904] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:59.250 [2024-07-15 19:41:49.826919] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:59.250 [2024-07-15 19:41:49.826931] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:59.250 [2024-07-15 19:41:49.826946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:59.250 [2024-07-15 19:41:49.826957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:59.250 [2024-07-15 19:41:49.826970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:59.250 [2024-07-15 19:41:49.826983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.250 [2024-07-15 19:41:49.826998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:59.250 [2024-07-15 19:41:49.827011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.737 ms 00:19:59.250 [2024-07-15 19:41:49.827025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.250 [2024-07-15 19:41:49.852264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.250 [2024-07-15 19:41:49.852332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:59.250 [2024-07-15 19:41:49.852349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.151 ms 00:19:59.250 [2024-07-15 19:41:49.852365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.250 [2024-07-15 19:41:49.853054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.250 [2024-07-15 19:41:49.853086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:59.250 [2024-07-15 19:41:49.853100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:19:59.250 [2024-07-15 19:41:49.853115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.250 [2024-07-15 19:41:49.941088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.250 [2024-07-15 19:41:49.941160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:59.250 [2024-07-15 19:41:49.941178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.250 [2024-07-15 19:41:49.941193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:59.250 [2024-07-15 19:41:49.941281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.250 [2024-07-15 19:41:49.941298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:59.250 [2024-07-15 19:41:49.941311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.250 [2024-07-15 19:41:49.941325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.250 [2024-07-15 19:41:49.941460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.250 [2024-07-15 19:41:49.941481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:59.250 [2024-07-15 19:41:49.941494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.250 [2024-07-15 19:41:49.941508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.250 [2024-07-15 19:41:49.941538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.250 [2024-07-15 19:41:49.941557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:59.250 [2024-07-15 19:41:49.941569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.250 [2024-07-15 19:41:49.941583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.508 [2024-07-15 19:41:50.097729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.508 [2024-07-15 19:41:50.097812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:59.508 [2024-07-15 19:41:50.097834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.508 [2024-07-15 19:41:50.097856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.508 [2024-07-15 19:41:50.222029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.508 [2024-07-15 19:41:50.222107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:59.508 [2024-07-15 19:41:50.222128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.508 [2024-07-15 19:41:50.222153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.508 [2024-07-15 19:41:50.222269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.508 [2024-07-15 19:41:50.222294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:59.508 [2024-07-15 19:41:50.222311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.508 [2024-07-15 19:41:50.222329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.508 [2024-07-15 19:41:50.222425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.508 [2024-07-15 19:41:50.222450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:59.508 [2024-07-15 19:41:50.222466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.508 [2024-07-15 19:41:50.222485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.508 [2024-07-15 19:41:50.222647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.508 [2024-07-15 19:41:50.222673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:59.508 [2024-07-15 19:41:50.222699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.508 [2024-07-15 
19:41:50.222718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.508 [2024-07-15 19:41:50.222814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.508 [2024-07-15 19:41:50.222838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:59.508 [2024-07-15 19:41:50.222854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.509 [2024-07-15 19:41:50.222872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.509 [2024-07-15 19:41:50.222935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.509 [2024-07-15 19:41:50.222955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:59.509 [2024-07-15 19:41:50.222974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.509 [2024-07-15 19:41:50.222992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.509 [2024-07-15 19:41:50.223067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.509 [2024-07-15 19:41:50.223089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:59.509 [2024-07-15 19:41:50.223102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.509 [2024-07-15 19:41:50.223116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.509 [2024-07-15 19:41:50.223299] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 654.791 ms, result 0 00:19:59.509 true 00:19:59.509 19:41:50 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 79985 00:19:59.509 19:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@948 -- # '[' -z 79985 ']' 00:19:59.509 19:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # kill -0 79985 00:19:59.509 19:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # uname 00:19:59.509 19:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:59.509 19:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79985 00:19:59.509 killing process with pid 79985 00:19:59.509 19:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:59.509 19:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:59.509 19:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79985' 00:19:59.509 19:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@967 -- # kill 79985 00:19:59.509 19:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # wait 79985 00:20:06.067 19:41:55 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:20:06.067 19:41:55 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:06.067 19:41:55 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:20:06.067 19:41:55 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:06.067 19:41:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:06.067 19:41:55 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:06.067 19:41:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:06.067 19:41:55 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:06.067 19:41:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:06.067 19:41:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:06.067 19:41:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:06.067 19:41:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:20:06.067 19:41:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:06.067 19:41:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:06.067 19:41:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:06.067 19:41:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:06.067 19:41:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:20:06.067 19:41:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:06.067 19:41:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:06.067 19:41:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:20:06.067 19:41:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:06.067 19:41:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:06.067 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:20:06.067 fio-3.35 00:20:06.067 Starting 1 thread 00:20:11.362 00:20:11.362 test: (groupid=0, jobs=1): err= 0: pid=80212: Mon Jul 15 19:42:01 2024 00:20:11.362 read: IOPS=1016, BW=67.5MiB/s (70.8MB/s)(255MiB/3771msec) 00:20:11.362 slat (nsec): min=4314, max=41447, avg=7106.36, stdev=3151.75 00:20:11.362 clat (usec): min=282, max=2271, avg=435.00, stdev=71.69 00:20:11.362 lat (usec): min=288, max=2277, avg=442.10, stdev=72.15 00:20:11.362 clat percentiles (usec): 00:20:11.362 | 1.00th=[ 318], 5.00th=[ 330], 10.00th=[ 347], 20.00th=[ 388], 00:20:11.362 | 30.00th=[ 400], 40.00th=[ 412], 50.00th=[ 429], 60.00th=[ 453], 00:20:11.362 | 70.00th=[ 469], 80.00th=[ 482], 90.00th=[ 519], 95.00th=[ 545], 00:20:11.362 | 99.00th=[ 611], 99.50th=[ 644], 99.90th=[ 766], 99.95th=[ 938], 00:20:11.362 | 99.99th=[ 2278] 00:20:11.362 write: IOPS=1023, BW=68.0MiB/s (71.3MB/s)(256MiB/3768msec); 0 zone resets 00:20:11.362 slat (nsec): min=16451, max=89143, avg=22150.49, stdev=5753.00 00:20:11.362 clat (usec): min=305, max=1062, avg=503.73, stdev=78.32 00:20:11.362 lat (usec): min=324, max=1091, avg=525.88, stdev=79.10 00:20:11.362 clat percentiles (usec): 00:20:11.362 | 1.00th=[ 363], 5.00th=[ 404], 10.00th=[ 416], 20.00th=[ 437], 00:20:11.362 | 30.00th=[ 465], 40.00th=[ 482], 50.00th=[ 494], 60.00th=[ 510], 00:20:11.362 | 70.00th=[ 537], 80.00th=[ 553], 90.00th=[ 586], 95.00th=[ 627], 00:20:11.362 | 99.00th=[ 807], 99.50th=[ 857], 99.90th=[ 1004], 99.95th=[ 1045], 00:20:11.362 | 99.99th=[ 1057] 00:20:11.362 bw ( KiB/s): min=66776, max=74120, per=99.36%, avg=69146.29, stdev=2575.46, samples=7 00:20:11.362 iops : min= 982, max= 1090, avg=1016.86, stdev=37.87, samples=7 00:20:11.362 lat (usec) : 500=69.84%, 750=29.41%, 1000=0.70% 
00:20:11.362 lat (msec) : 2=0.04%, 4=0.01% 00:20:11.362 cpu : usr=99.28%, sys=0.08%, ctx=9, majf=0, minf=1171 00:20:11.362 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:11.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.362 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.362 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:11.362 00:20:11.362 Run status group 0 (all jobs): 00:20:11.362 READ: bw=67.5MiB/s (70.8MB/s), 67.5MiB/s-67.5MiB/s (70.8MB/s-70.8MB/s), io=255MiB (267MB), run=3771-3771msec 00:20:11.362 WRITE: bw=68.0MiB/s (71.3MB/s), 68.0MiB/s-68.0MiB/s (71.3MB/s-71.3MB/s), io=256MiB (269MB), run=3768-3768msec 00:20:12.736 ----------------------------------------------------- 00:20:12.736 Suppressions used: 00:20:12.736 count bytes template 00:20:12.736 1 5 /usr/src/fio/parse.c 00:20:12.736 1 8 libtcmalloc_minimal.so 00:20:12.736 1 904 libcrypto.so 00:20:12.736 ----------------------------------------------------- 00:20:12.736 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:12.736 19:42:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:12.994 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:12.994 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:12.994 fio-3.35 00:20:12.994 Starting 2 threads 00:20:45.082 00:20:45.082 first_half: (groupid=0, jobs=1): err= 0: pid=80321: Mon Jul 15 19:42:31 2024 00:20:45.082 read: IOPS=2454, BW=9818KiB/s (10.1MB/s)(255MiB/26553msec) 00:20:45.082 slat (nsec): min=3767, max=47841, avg=6649.50, stdev=2030.10 00:20:45.082 clat (usec): min=678, max=303664, avg=35394.99, stdev=17532.16 00:20:45.082 lat (usec): min=687, max=303671, avg=35401.64, stdev=17532.44 00:20:45.082 clat percentiles (msec): 00:20:45.082 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 34], 20.00th=[ 34], 00:20:45.082 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:20:45.082 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 39], 95.00th=[ 43], 00:20:45.082 | 99.00th=[ 116], 99.50th=[ 174], 99.90th=[ 243], 99.95th=[ 271], 00:20:45.082 | 99.99th=[ 292] 00:20:45.082 write: IOPS=3204, BW=12.5MiB/s (13.1MB/s)(256MiB/20451msec); 0 zone resets 00:20:45.082 slat (usec): min=4, max=379, avg= 9.43, stdev= 5.49 00:20:45.082 clat (usec): min=416, max=109104, avg=16617.56, stdev=26553.76 00:20:45.082 lat (usec): min=450, max=109112, avg=16626.99, stdev=26553.93 00:20:45.082 clat percentiles (usec): 00:20:45.082 | 1.00th=[ 775], 5.00th=[ 996], 10.00th=[ 1139], 20.00th=[ 1401], 00:20:45.082 | 30.00th=[ 1647], 40.00th=[ 2180], 50.00th=[ 5342], 60.00th=[ 6783], 00:20:45.082 | 70.00th=[ 11731], 80.00th=[ 17433], 90.00th=[ 74974], 95.00th=[ 84411], 00:20:45.082 | 99.00th=[ 94897], 99.50th=[ 96994], 99.90th=[104334], 99.95th=[107480], 00:20:45.082 | 99.99th=[108528] 00:20:45.082 bw ( KiB/s): min= 864, max=38744, per=78.65%, avg=20164.92, stdev=10062.49, samples=26 00:20:45.082 iops : min= 216, max= 9686, avg=5041.23, stdev=2515.62, samples=26 00:20:45.082 lat (usec) : 500=0.01%, 750=0.32%, 1000=2.30% 00:20:45.082 lat (msec) : 2=16.76%, 4=3.91%, 10=13.64%, 20=7.19%, 50=47.96% 00:20:45.082 lat (msec) : 100=7.13%, 250=0.74%, 500=0.04% 00:20:45.082 cpu : usr=99.25%, sys=0.16%, ctx=90, majf=0, minf=5535 00:20:45.082 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:45.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.082 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:45.082 issued rwts: total=65177,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:45.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:45.082 second_half: (groupid=0, jobs=1): err= 0: pid=80322: Mon Jul 15 19:42:31 2024 00:20:45.082 read: IOPS=2478, BW=9914KiB/s (10.2MB/s)(254MiB/26277msec) 00:20:45.082 slat (nsec): min=3742, max=38615, avg=6668.85, stdev=1980.77 00:20:45.082 clat (usec): min=825, max=311035, avg=36773.21, stdev=15009.99 00:20:45.082 lat (usec): min=833, max=311043, avg=36779.88, stdev=15010.13 00:20:45.082 clat percentiles (msec): 00:20:45.082 | 1.00th=[ 4], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:20:45.082 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:20:45.082 | 70.00th=[ 36], 80.00th=[ 38], 
90.00th=[ 41], 95.00th=[ 45], 00:20:45.082 | 99.00th=[ 114], 99.50th=[ 146], 99.90th=[ 190], 99.95th=[ 220], 00:20:45.082 | 99.99th=[ 305] 00:20:45.082 write: IOPS=4433, BW=17.3MiB/s (18.2MB/s)(256MiB/14781msec); 0 zone resets 00:20:45.082 slat (usec): min=4, max=1430, avg= 9.31, stdev= 8.00 00:20:45.082 clat (usec): min=515, max=108853, avg=14699.93, stdev=26062.33 00:20:45.082 lat (usec): min=521, max=108861, avg=14709.24, stdev=26062.43 00:20:45.082 clat percentiles (usec): 00:20:45.082 | 1.00th=[ 955], 5.00th=[ 1172], 10.00th=[ 1319], 20.00th=[ 1549], 00:20:45.082 | 30.00th=[ 1745], 40.00th=[ 2024], 50.00th=[ 3326], 60.00th=[ 5669], 00:20:45.082 | 70.00th=[ 8979], 80.00th=[ 13566], 90.00th=[ 73925], 95.00th=[ 84411], 00:20:45.082 | 99.00th=[ 94897], 99.50th=[ 96994], 99.90th=[102237], 99.95th=[104334], 00:20:45.082 | 99.99th=[106431] 00:20:45.082 bw ( KiB/s): min= 4912, max=46144, per=92.97%, avg=23834.45, stdev=10748.64, samples=22 00:20:45.082 iops : min= 1228, max=11536, avg=5958.59, stdev=2687.13, samples=22 00:20:45.082 lat (usec) : 750=0.06%, 1000=0.68% 00:20:45.082 lat (msec) : 2=19.21%, 4=7.31%, 10=9.49%, 20=8.05%, 50=47.23% 00:20:45.082 lat (msec) : 100=7.15%, 250=0.81%, 500=0.01% 00:20:45.082 cpu : usr=99.24%, sys=0.16%, ctx=39, majf=0, minf=5586 00:20:45.082 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:45.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.082 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:45.082 issued rwts: total=65129,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:45.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:45.082 00:20:45.082 Run status group 0 (all jobs): 00:20:45.082 READ: bw=19.2MiB/s (20.1MB/s), 9818KiB/s-9914KiB/s (10.1MB/s-10.2MB/s), io=509MiB (534MB), run=26277-26553msec 00:20:45.082 WRITE: bw=25.0MiB/s (26.3MB/s), 12.5MiB/s-17.3MiB/s (13.1MB/s-18.2MB/s), io=512MiB (537MB), run=14781-20451msec 00:20:45.082 ----------------------------------------------------- 00:20:45.082 Suppressions used: 00:20:45.082 count bytes template 00:20:45.082 2 10 /usr/src/fio/parse.c 00:20:45.082 3 288 /usr/src/fio/iolog.c 00:20:45.082 1 8 libtcmalloc_minimal.so 00:20:45.082 1 904 libcrypto.so 00:20:45.082 ----------------------------------------------------- 00:20:45.082 00:20:45.082 19:42:33 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:20:45.082 19:42:33 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:45.082 19:42:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:45.082 19:42:34 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:45.082 19:42:34 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:20:45.082 19:42:34 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:45.082 19:42:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:45.082 19:42:34 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:45.082 19:42:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:45.082 19:42:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:45.082 19:42:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:20:45.082 19:42:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:45.082 19:42:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:45.082 19:42:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:20:45.082 19:42:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:45.082 19:42:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:45.082 19:42:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:45.082 19:42:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:20:45.082 19:42:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:45.082 19:42:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:45.082 19:42:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:45.082 19:42:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:20:45.082 19:42:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:45.082 19:42:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:45.082 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:45.082 fio-3.35 00:20:45.082 Starting 1 thread 00:20:59.956 00:20:59.956 test: (groupid=0, jobs=1): err= 0: pid=80658: Mon Jul 15 19:42:50 2024 00:20:59.956 read: IOPS=6948, BW=27.1MiB/s (28.5MB/s)(255MiB/9384msec) 00:20:59.956 slat (nsec): min=3763, max=79921, avg=6026.32, stdev=1908.43 00:20:59.956 clat (usec): min=705, max=34836, avg=18411.37, stdev=1203.44 00:20:59.956 lat (usec): min=710, max=34844, avg=18417.39, stdev=1203.49 00:20:59.956 clat percentiles (usec): 00:20:59.956 | 1.00th=[16909], 5.00th=[17171], 10.00th=[17433], 20.00th=[17695], 00:20:59.956 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18220], 60.00th=[18482], 00:20:59.956 | 70.00th=[18744], 80.00th=[19006], 90.00th=[19268], 95.00th=[20841], 00:20:59.956 | 99.00th=[22414], 99.50th=[23462], 99.90th=[26084], 99.95th=[30278], 00:20:59.956 | 99.99th=[34341] 00:20:59.956 write: IOPS=12.8k, BW=49.9MiB/s (52.3MB/s)(256MiB/5131msec); 0 zone resets 00:20:59.956 slat (usec): min=4, max=1107, avg= 8.72, stdev= 8.25 00:20:59.956 clat (usec): min=502, max=61661, avg=9970.30, stdev=12366.92 00:20:59.956 lat (usec): min=512, max=61670, avg=9979.03, stdev=12366.94 00:20:59.956 clat percentiles (usec): 00:20:59.956 | 1.00th=[ 873], 5.00th=[ 1045], 10.00th=[ 1156], 20.00th=[ 1352], 00:20:59.956 | 30.00th=[ 1549], 40.00th=[ 1991], 50.00th=[ 6783], 60.00th=[ 7832], 00:20:59.956 | 70.00th=[ 8848], 80.00th=[10552], 90.00th=[34866], 95.00th=[38536], 00:20:59.956 | 99.00th=[45351], 99.50th=[46400], 99.90th=[50594], 99.95th=[51643], 00:20:59.956 | 99.99th=[56886] 00:20:59.956 bw ( KiB/s): min=10152, max=63936, per=93.27%, avg=47652.27, stdev=14843.02, samples=11 00:20:59.956 iops : min= 2538, max=15984, avg=11913.00, stdev=3710.71, samples=11 00:20:59.956 lat (usec) : 750=0.12%, 1000=1.63% 00:20:59.956 lat (msec) : 2=18.35%, 4=0.93%, 10=17.82%, 20=50.01%, 50=11.08% 00:20:59.956 lat (msec) : 100=0.06% 00:20:59.956 cpu : 
usr=98.95%, sys=0.28%, ctx=32, majf=0, minf=5567 00:20:59.956 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:59.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.956 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:59.956 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.956 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:59.956 00:20:59.956 Run status group 0 (all jobs): 00:20:59.956 READ: bw=27.1MiB/s (28.5MB/s), 27.1MiB/s-27.1MiB/s (28.5MB/s-28.5MB/s), io=255MiB (267MB), run=9384-9384msec 00:20:59.956 WRITE: bw=49.9MiB/s (52.3MB/s), 49.9MiB/s-49.9MiB/s (52.3MB/s-52.3MB/s), io=256MiB (268MB), run=5131-5131msec 00:21:01.334 ----------------------------------------------------- 00:21:01.334 Suppressions used: 00:21:01.334 count bytes template 00:21:01.334 1 5 /usr/src/fio/parse.c 00:21:01.334 2 192 /usr/src/fio/iolog.c 00:21:01.334 1 8 libtcmalloc_minimal.so 00:21:01.334 1 904 libcrypto.so 00:21:01.334 ----------------------------------------------------- 00:21:01.334 00:21:01.334 19:42:52 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:21:01.334 19:42:52 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:01.334 19:42:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:01.593 19:42:52 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:01.593 Remove shared memory files 00:21:01.593 19:42:52 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:21:01.593 19:42:52 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:01.593 19:42:52 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:21:01.593 19:42:52 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:21:01.593 19:42:52 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid62157 /dev/shm/spdk_tgt_trace.pid78894 00:21:01.593 19:42:52 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:01.593 19:42:52 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:21:01.593 ************************************ 00:21:01.593 END TEST ftl_fio_basic 00:21:01.593 ************************************ 00:21:01.593 00:21:01.593 real 1m12.276s 00:21:01.593 user 2m37.776s 00:21:01.593 sys 0m4.024s 00:21:01.593 19:42:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:01.593 19:42:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:01.593 19:42:52 ftl -- common/autotest_common.sh@1142 -- # return 0 00:21:01.593 19:42:52 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:21:01.593 19:42:52 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:01.593 19:42:52 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:01.593 19:42:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:01.593 ************************************ 00:21:01.593 START TEST ftl_bdevperf 00:21:01.593 ************************************ 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:21:01.593 * Looking for test storage... 
00:21:01.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:21:01.593 19:42:52 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:01.593 19:42:52 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:01.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.851 19:42:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=80898 00:21:01.851 19:42:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:21:01.851 19:42:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 80898 00:21:01.852 19:42:52 ftl.ftl_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 80898 ']' 00:21:01.852 19:42:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:21:01.852 19:42:52 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.852 19:42:52 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:01.852 19:42:52 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.852 19:42:52 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:01.852 19:42:52 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:01.852 [2024-07-15 19:42:52.498474] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:21:01.852 [2024-07-15 19:42:52.498660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80898 ] 00:21:02.110 [2024-07-15 19:42:52.688029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.367 [2024-07-15 19:42:52.972816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.634 19:42:53 ftl.ftl_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:02.634 19:42:53 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:21:02.634 19:42:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:02.634 19:42:53 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:21:02.634 19:42:53 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:02.634 19:42:53 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:21:02.634 19:42:53 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:21:02.634 19:42:53 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:03.201 19:42:53 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:03.201 19:42:53 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:21:03.201 19:42:53 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:03.201 19:42:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:21:03.201 19:42:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:03.201 19:42:53 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:21:03.201 19:42:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:21:03.201 19:42:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:03.458 19:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:03.458 { 00:21:03.458 "name": "nvme0n1", 00:21:03.458 "aliases": [ 00:21:03.458 "1a92bae5-dc38-4d2f-a977-8e178310fb9b" 00:21:03.458 ], 00:21:03.458 "product_name": "NVMe disk", 00:21:03.458 "block_size": 4096, 00:21:03.458 "num_blocks": 1310720, 00:21:03.458 "uuid": "1a92bae5-dc38-4d2f-a977-8e178310fb9b", 00:21:03.458 "assigned_rate_limits": { 00:21:03.458 "rw_ios_per_sec": 0, 00:21:03.458 "rw_mbytes_per_sec": 0, 00:21:03.458 "r_mbytes_per_sec": 0, 00:21:03.458 "w_mbytes_per_sec": 0 00:21:03.458 }, 00:21:03.458 "claimed": true, 00:21:03.458 "claim_type": "read_many_write_one", 00:21:03.458 "zoned": false, 00:21:03.458 "supported_io_types": { 00:21:03.458 "read": true, 00:21:03.458 "write": true, 00:21:03.458 "unmap": true, 00:21:03.458 "flush": true, 00:21:03.458 "reset": true, 00:21:03.458 "nvme_admin": true, 00:21:03.458 "nvme_io": true, 00:21:03.458 "nvme_io_md": false, 00:21:03.458 "write_zeroes": true, 00:21:03.458 "zcopy": false, 00:21:03.458 "get_zone_info": false, 00:21:03.458 "zone_management": false, 00:21:03.458 "zone_append": false, 00:21:03.458 "compare": true, 00:21:03.458 "compare_and_write": false, 00:21:03.458 "abort": true, 00:21:03.458 "seek_hole": false, 00:21:03.458 "seek_data": false, 00:21:03.458 "copy": true, 00:21:03.458 "nvme_iov_md": false 00:21:03.458 }, 00:21:03.458 "driver_specific": { 00:21:03.458 "nvme": [ 00:21:03.458 { 00:21:03.458 "pci_address": "0000:00:11.0", 00:21:03.458 "trid": { 00:21:03.458 "trtype": "PCIe", 00:21:03.458 "traddr": "0000:00:11.0" 00:21:03.458 }, 00:21:03.458 "ctrlr_data": { 00:21:03.458 "cntlid": 0, 00:21:03.458 "vendor_id": "0x1b36", 00:21:03.458 "model_number": "QEMU NVMe Ctrl", 00:21:03.458 "serial_number": "12341", 00:21:03.458 "firmware_revision": "8.0.0", 00:21:03.458 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:03.458 "oacs": { 00:21:03.458 "security": 0, 00:21:03.459 "format": 1, 00:21:03.459 "firmware": 0, 00:21:03.459 "ns_manage": 1 00:21:03.459 }, 00:21:03.459 "multi_ctrlr": false, 00:21:03.459 "ana_reporting": false 00:21:03.459 }, 00:21:03.459 "vs": { 00:21:03.459 "nvme_version": "1.4" 00:21:03.459 }, 00:21:03.459 "ns_data": { 00:21:03.459 "id": 1, 00:21:03.459 "can_share": false 00:21:03.459 } 00:21:03.459 } 00:21:03.459 ], 00:21:03.459 "mp_policy": "active_passive" 00:21:03.459 } 00:21:03.459 } 00:21:03.459 ]' 00:21:03.459 19:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:03.459 19:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:21:03.459 19:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:03.459 19:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:03.459 19:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:03.459 19:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:21:03.459 19:42:54 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:21:03.459 19:42:54 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:03.459 19:42:54 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:21:03.459 19:42:54 ftl.ftl_bdevperf 
-- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:03.459 19:42:54 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:03.716 19:42:54 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=96b0f1e2-b936-4e16-81a4-3af81b1e6e6a 00:21:03.716 19:42:54 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:21:03.716 19:42:54 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 96b0f1e2-b936-4e16-81a4-3af81b1e6e6a 00:21:03.973 19:42:54 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:03.973 19:42:54 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=33484c7e-15c8-46b4-a93b-6199b30102b7 00:21:03.973 19:42:54 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 33484c7e-15c8-46b4-a93b-6199b30102b7 00:21:04.229 19:42:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=44f9b87f-2dca-46e4-bf1a-19779ff69839 00:21:04.229 19:42:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 44f9b87f-2dca-46e4-bf1a-19779ff69839 00:21:04.229 19:42:54 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:21:04.229 19:42:54 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:04.229 19:42:54 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=44f9b87f-2dca-46e4-bf1a-19779ff69839 00:21:04.229 19:42:54 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:21:04.229 19:42:54 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 44f9b87f-2dca-46e4-bf1a-19779ff69839 00:21:04.229 19:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=44f9b87f-2dca-46e4-bf1a-19779ff69839 00:21:04.230 19:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:04.230 19:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:21:04.230 19:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:21:04.230 19:42:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 44f9b87f-2dca-46e4-bf1a-19779ff69839 00:21:04.488 19:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:04.488 { 00:21:04.488 "name": "44f9b87f-2dca-46e4-bf1a-19779ff69839", 00:21:04.488 "aliases": [ 00:21:04.488 "lvs/nvme0n1p0" 00:21:04.488 ], 00:21:04.488 "product_name": "Logical Volume", 00:21:04.488 "block_size": 4096, 00:21:04.488 "num_blocks": 26476544, 00:21:04.488 "uuid": "44f9b87f-2dca-46e4-bf1a-19779ff69839", 00:21:04.488 "assigned_rate_limits": { 00:21:04.488 "rw_ios_per_sec": 0, 00:21:04.488 "rw_mbytes_per_sec": 0, 00:21:04.488 "r_mbytes_per_sec": 0, 00:21:04.488 "w_mbytes_per_sec": 0 00:21:04.488 }, 00:21:04.488 "claimed": false, 00:21:04.488 "zoned": false, 00:21:04.488 "supported_io_types": { 00:21:04.488 "read": true, 00:21:04.488 "write": true, 00:21:04.488 "unmap": true, 00:21:04.488 "flush": false, 00:21:04.488 "reset": true, 00:21:04.488 "nvme_admin": false, 00:21:04.488 "nvme_io": false, 00:21:04.488 "nvme_io_md": false, 00:21:04.488 "write_zeroes": true, 00:21:04.488 "zcopy": false, 00:21:04.488 "get_zone_info": false, 00:21:04.488 "zone_management": false, 00:21:04.488 "zone_append": false, 00:21:04.488 "compare": false, 00:21:04.488 "compare_and_write": false, 00:21:04.488 "abort": false, 00:21:04.488 "seek_hole": true, 
00:21:04.488 "seek_data": true, 00:21:04.488 "copy": false, 00:21:04.488 "nvme_iov_md": false 00:21:04.488 }, 00:21:04.488 "driver_specific": { 00:21:04.488 "lvol": { 00:21:04.488 "lvol_store_uuid": "33484c7e-15c8-46b4-a93b-6199b30102b7", 00:21:04.488 "base_bdev": "nvme0n1", 00:21:04.488 "thin_provision": true, 00:21:04.488 "num_allocated_clusters": 0, 00:21:04.488 "snapshot": false, 00:21:04.488 "clone": false, 00:21:04.488 "esnap_clone": false 00:21:04.488 } 00:21:04.488 } 00:21:04.488 } 00:21:04.488 ]' 00:21:04.488 19:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:04.488 19:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:21:04.488 19:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:04.488 19:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:04.488 19:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:04.488 19:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:21:04.488 19:42:55 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:21:04.488 19:42:55 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:21:04.488 19:42:55 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:04.758 19:42:55 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:04.758 19:42:55 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:04.758 19:42:55 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 44f9b87f-2dca-46e4-bf1a-19779ff69839 00:21:04.758 19:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=44f9b87f-2dca-46e4-bf1a-19779ff69839 00:21:04.758 19:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:04.759 19:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:21:04.759 19:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:21:04.759 19:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 44f9b87f-2dca-46e4-bf1a-19779ff69839 00:21:05.017 19:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:05.017 { 00:21:05.017 "name": "44f9b87f-2dca-46e4-bf1a-19779ff69839", 00:21:05.017 "aliases": [ 00:21:05.017 "lvs/nvme0n1p0" 00:21:05.017 ], 00:21:05.017 "product_name": "Logical Volume", 00:21:05.017 "block_size": 4096, 00:21:05.017 "num_blocks": 26476544, 00:21:05.017 "uuid": "44f9b87f-2dca-46e4-bf1a-19779ff69839", 00:21:05.017 "assigned_rate_limits": { 00:21:05.017 "rw_ios_per_sec": 0, 00:21:05.017 "rw_mbytes_per_sec": 0, 00:21:05.017 "r_mbytes_per_sec": 0, 00:21:05.017 "w_mbytes_per_sec": 0 00:21:05.017 }, 00:21:05.017 "claimed": false, 00:21:05.017 "zoned": false, 00:21:05.017 "supported_io_types": { 00:21:05.017 "read": true, 00:21:05.017 "write": true, 00:21:05.017 "unmap": true, 00:21:05.017 "flush": false, 00:21:05.017 "reset": true, 00:21:05.017 "nvme_admin": false, 00:21:05.017 "nvme_io": false, 00:21:05.017 "nvme_io_md": false, 00:21:05.017 "write_zeroes": true, 00:21:05.017 "zcopy": false, 00:21:05.017 "get_zone_info": false, 00:21:05.017 "zone_management": false, 00:21:05.017 "zone_append": false, 00:21:05.017 "compare": false, 00:21:05.017 "compare_and_write": false, 00:21:05.017 "abort": false, 00:21:05.017 "seek_hole": true, 00:21:05.017 "seek_data": true, 00:21:05.017 
"copy": false, 00:21:05.017 "nvme_iov_md": false 00:21:05.017 }, 00:21:05.017 "driver_specific": { 00:21:05.017 "lvol": { 00:21:05.017 "lvol_store_uuid": "33484c7e-15c8-46b4-a93b-6199b30102b7", 00:21:05.017 "base_bdev": "nvme0n1", 00:21:05.017 "thin_provision": true, 00:21:05.017 "num_allocated_clusters": 0, 00:21:05.017 "snapshot": false, 00:21:05.017 "clone": false, 00:21:05.017 "esnap_clone": false 00:21:05.017 } 00:21:05.017 } 00:21:05.017 } 00:21:05.017 ]' 00:21:05.017 19:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:05.409 19:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:21:05.409 19:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:05.409 19:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:05.409 19:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:05.409 19:42:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:21:05.409 19:42:55 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:21:05.409 19:42:55 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:05.409 19:42:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:21:05.409 19:42:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 44f9b87f-2dca-46e4-bf1a-19779ff69839 00:21:05.409 19:42:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=44f9b87f-2dca-46e4-bf1a-19779ff69839 00:21:05.409 19:42:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:05.409 19:42:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:21:05.409 19:42:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:21:05.409 19:42:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 44f9b87f-2dca-46e4-bf1a-19779ff69839 00:21:05.668 19:42:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:05.668 { 00:21:05.668 "name": "44f9b87f-2dca-46e4-bf1a-19779ff69839", 00:21:05.668 "aliases": [ 00:21:05.668 "lvs/nvme0n1p0" 00:21:05.668 ], 00:21:05.668 "product_name": "Logical Volume", 00:21:05.668 "block_size": 4096, 00:21:05.668 "num_blocks": 26476544, 00:21:05.668 "uuid": "44f9b87f-2dca-46e4-bf1a-19779ff69839", 00:21:05.668 "assigned_rate_limits": { 00:21:05.668 "rw_ios_per_sec": 0, 00:21:05.668 "rw_mbytes_per_sec": 0, 00:21:05.668 "r_mbytes_per_sec": 0, 00:21:05.668 "w_mbytes_per_sec": 0 00:21:05.668 }, 00:21:05.668 "claimed": false, 00:21:05.668 "zoned": false, 00:21:05.668 "supported_io_types": { 00:21:05.668 "read": true, 00:21:05.668 "write": true, 00:21:05.668 "unmap": true, 00:21:05.668 "flush": false, 00:21:05.668 "reset": true, 00:21:05.668 "nvme_admin": false, 00:21:05.668 "nvme_io": false, 00:21:05.668 "nvme_io_md": false, 00:21:05.668 "write_zeroes": true, 00:21:05.668 "zcopy": false, 00:21:05.668 "get_zone_info": false, 00:21:05.668 "zone_management": false, 00:21:05.668 "zone_append": false, 00:21:05.668 "compare": false, 00:21:05.668 "compare_and_write": false, 00:21:05.668 "abort": false, 00:21:05.668 "seek_hole": true, 00:21:05.668 "seek_data": true, 00:21:05.668 "copy": false, 00:21:05.668 "nvme_iov_md": false 00:21:05.668 }, 00:21:05.668 "driver_specific": { 00:21:05.668 "lvol": { 00:21:05.668 "lvol_store_uuid": "33484c7e-15c8-46b4-a93b-6199b30102b7", 00:21:05.668 "base_bdev": 
"nvme0n1", 00:21:05.668 "thin_provision": true, 00:21:05.668 "num_allocated_clusters": 0, 00:21:05.668 "snapshot": false, 00:21:05.668 "clone": false, 00:21:05.668 "esnap_clone": false 00:21:05.668 } 00:21:05.668 } 00:21:05.668 } 00:21:05.668 ]' 00:21:05.668 19:42:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:05.668 19:42:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:21:05.668 19:42:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:05.668 19:42:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:05.668 19:42:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:05.668 19:42:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:21:05.668 19:42:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:21:05.668 19:42:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 44f9b87f-2dca-46e4-bf1a-19779ff69839 -c nvc0n1p0 --l2p_dram_limit 20 00:21:05.927 [2024-07-15 19:42:56.647608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.927 [2024-07-15 19:42:56.647667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:05.927 [2024-07-15 19:42:56.647686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:05.927 [2024-07-15 19:42:56.647697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.927 [2024-07-15 19:42:56.647759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.927 [2024-07-15 19:42:56.647771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:05.927 [2024-07-15 19:42:56.647802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:05.927 [2024-07-15 19:42:56.647815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.927 [2024-07-15 19:42:56.647837] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:05.927 [2024-07-15 19:42:56.649023] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:05.927 [2024-07-15 19:42:56.649058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.927 [2024-07-15 19:42:56.649072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:05.927 [2024-07-15 19:42:56.649086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.222 ms 00:21:05.927 [2024-07-15 19:42:56.649096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.927 [2024-07-15 19:42:56.649174] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4517ccd9-d10d-4e63-8718-31e660d2995c 00:21:05.927 [2024-07-15 19:42:56.650591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.927 [2024-07-15 19:42:56.650630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:05.927 [2024-07-15 19:42:56.650643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:05.927 [2024-07-15 19:42:56.650659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.927 [2024-07-15 19:42:56.658150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.927 [2024-07-15 19:42:56.658187] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:05.927 [2024-07-15 19:42:56.658216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.441 ms 00:21:05.927 [2024-07-15 19:42:56.658229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.927 [2024-07-15 19:42:56.658332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.927 [2024-07-15 19:42:56.658352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:05.927 [2024-07-15 19:42:56.658376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:21:05.927 [2024-07-15 19:42:56.658392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.927 [2024-07-15 19:42:56.658454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.927 [2024-07-15 19:42:56.658468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:05.927 [2024-07-15 19:42:56.658479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:05.927 [2024-07-15 19:42:56.658492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.927 [2024-07-15 19:42:56.658515] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:05.927 [2024-07-15 19:42:56.664503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.927 [2024-07-15 19:42:56.664538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:05.927 [2024-07-15 19:42:56.664555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.992 ms 00:21:05.927 [2024-07-15 19:42:56.664565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.927 [2024-07-15 19:42:56.664603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.927 [2024-07-15 19:42:56.664617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:05.927 [2024-07-15 19:42:56.664629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:05.927 [2024-07-15 19:42:56.664639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.927 [2024-07-15 19:42:56.664684] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:05.927 [2024-07-15 19:42:56.664834] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:05.927 [2024-07-15 19:42:56.664858] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:05.927 [2024-07-15 19:42:56.664871] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:05.927 [2024-07-15 19:42:56.664887] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:05.927 [2024-07-15 19:42:56.664899] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:05.927 [2024-07-15 19:42:56.664913] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:05.927 [2024-07-15 19:42:56.664923] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:05.927 [2024-07-15 19:42:56.664937] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:05.927 [2024-07-15 19:42:56.664947] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:21:05.927 [2024-07-15 19:42:56.664960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.927 [2024-07-15 19:42:56.664970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:05.927 [2024-07-15 19:42:56.664983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:21:05.927 [2024-07-15 19:42:56.664996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.927 [2024-07-15 19:42:56.665069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.927 [2024-07-15 19:42:56.665080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:05.927 [2024-07-15 19:42:56.665093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:05.927 [2024-07-15 19:42:56.665103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.927 [2024-07-15 19:42:56.665185] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:05.927 [2024-07-15 19:42:56.665197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:05.927 [2024-07-15 19:42:56.665210] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:05.927 [2024-07-15 19:42:56.665220] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.927 [2024-07-15 19:42:56.665236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:05.927 [2024-07-15 19:42:56.665245] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:05.927 [2024-07-15 19:42:56.665265] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:05.927 [2024-07-15 19:42:56.665275] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:05.927 [2024-07-15 19:42:56.665287] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:05.927 [2024-07-15 19:42:56.665297] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:05.927 [2024-07-15 19:42:56.665311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:05.927 [2024-07-15 19:42:56.665321] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:05.927 [2024-07-15 19:42:56.665332] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:05.927 [2024-07-15 19:42:56.665342] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:05.927 [2024-07-15 19:42:56.665355] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:05.927 [2024-07-15 19:42:56.665364] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.927 [2024-07-15 19:42:56.665379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:05.927 [2024-07-15 19:42:56.665388] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:05.927 [2024-07-15 19:42:56.665411] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.927 [2024-07-15 19:42:56.665421] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:05.927 [2024-07-15 19:42:56.665433] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:05.927 [2024-07-15 19:42:56.665442] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:05.927 [2024-07-15 19:42:56.665455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:05.927 [2024-07-15 19:42:56.665464] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:05.927 [2024-07-15 19:42:56.665476] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:05.927 [2024-07-15 19:42:56.665485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:05.927 [2024-07-15 19:42:56.665497] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:05.927 [2024-07-15 19:42:56.665506] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:05.928 [2024-07-15 19:42:56.665517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:05.928 [2024-07-15 19:42:56.665527] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:05.928 [2024-07-15 19:42:56.665539] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:05.928 [2024-07-15 19:42:56.665548] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:05.928 [2024-07-15 19:42:56.665562] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:05.928 [2024-07-15 19:42:56.665571] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:05.928 [2024-07-15 19:42:56.665583] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:05.928 [2024-07-15 19:42:56.665592] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:05.928 [2024-07-15 19:42:56.665604] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:05.928 [2024-07-15 19:42:56.665613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:05.928 [2024-07-15 19:42:56.665625] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:05.928 [2024-07-15 19:42:56.665634] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.928 [2024-07-15 19:42:56.665646] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:05.928 [2024-07-15 19:42:56.665655] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:05.928 [2024-07-15 19:42:56.665667] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.928 [2024-07-15 19:42:56.665676] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:05.928 [2024-07-15 19:42:56.665688] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:05.928 [2024-07-15 19:42:56.665698] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:05.928 [2024-07-15 19:42:56.665710] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.928 [2024-07-15 19:42:56.665720] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:05.928 [2024-07-15 19:42:56.665734] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:05.928 [2024-07-15 19:42:56.665743] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:05.928 [2024-07-15 19:42:56.665754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:05.928 [2024-07-15 19:42:56.665763] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:05.928 [2024-07-15 19:42:56.665775] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:05.928 [2024-07-15 19:42:56.665800] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:05.928 [2024-07-15 19:42:56.665815] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:05.928 [2024-07-15 19:42:56.665830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:05.928 [2024-07-15 19:42:56.665844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:05.928 [2024-07-15 19:42:56.665854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:05.928 [2024-07-15 19:42:56.665868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:05.928 [2024-07-15 19:42:56.665878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:05.928 [2024-07-15 19:42:56.665892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:05.928 [2024-07-15 19:42:56.665902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:05.928 [2024-07-15 19:42:56.665915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:05.928 [2024-07-15 19:42:56.665926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:05.928 [2024-07-15 19:42:56.665943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:05.928 [2024-07-15 19:42:56.665953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:05.928 [2024-07-15 19:42:56.665965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:05.928 [2024-07-15 19:42:56.665976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:05.928 [2024-07-15 19:42:56.665988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:05.928 [2024-07-15 19:42:56.665999] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:05.928 [2024-07-15 19:42:56.666012] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:05.928 [2024-07-15 19:42:56.666023] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:05.928 [2024-07-15 19:42:56.666036] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:05.928 [2024-07-15 19:42:56.666047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:05.928 [2024-07-15 19:42:56.666059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:05.928 [2024-07-15 19:42:56.666070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.928 [2024-07-15 19:42:56.666083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:05.928 [2024-07-15 19:42:56.666096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.941 ms 00:21:05.928 [2024-07-15 19:42:56.666108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.928 [2024-07-15 19:42:56.666145] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:05.928 [2024-07-15 19:42:56.666162] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:08.455 [2024-07-15 19:42:59.105873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.455 [2024-07-15 19:42:59.105948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:08.455 [2024-07-15 19:42:59.105967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2439.711 ms 00:21:08.455 [2024-07-15 19:42:59.105984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.455 [2024-07-15 19:42:59.161889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.455 [2024-07-15 19:42:59.161956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:08.455 [2024-07-15 19:42:59.161981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.651 ms 00:21:08.455 [2024-07-15 19:42:59.161998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.455 [2024-07-15 19:42:59.162170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.455 [2024-07-15 19:42:59.162190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:08.455 [2024-07-15 19:42:59.162211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:21:08.455 [2024-07-15 19:42:59.162229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.455 [2024-07-15 19:42:59.215706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.455 [2024-07-15 19:42:59.215774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:08.455 [2024-07-15 19:42:59.215803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.427 ms 00:21:08.455 [2024-07-15 19:42:59.215816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.455 [2024-07-15 19:42:59.215865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.455 [2024-07-15 19:42:59.215885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:08.455 [2024-07-15 19:42:59.215897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:08.455 [2024-07-15 19:42:59.215909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.455 [2024-07-15 19:42:59.216412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.455 [2024-07-15 19:42:59.216439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:08.455 [2024-07-15 19:42:59.216451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:21:08.455 [2024-07-15 19:42:59.216465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.455 [2024-07-15 19:42:59.216580] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.455 [2024-07-15 19:42:59.216597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:08.455 [2024-07-15 19:42:59.216609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:21:08.455 [2024-07-15 19:42:59.216626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.455 [2024-07-15 19:42:59.238519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.455 [2024-07-15 19:42:59.238582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:08.455 [2024-07-15 19:42:59.238598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.867 ms 00:21:08.455 [2024-07-15 19:42:59.238613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.712 [2024-07-15 19:42:59.254278] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:21:08.712 [2024-07-15 19:42:59.260358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.712 [2024-07-15 19:42:59.260414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:08.712 [2024-07-15 19:42:59.260431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.626 ms 00:21:08.712 [2024-07-15 19:42:59.260442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.712 [2024-07-15 19:42:59.334685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.712 [2024-07-15 19:42:59.334771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:08.712 [2024-07-15 19:42:59.334805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.186 ms 00:21:08.712 [2024-07-15 19:42:59.334817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.712 [2024-07-15 19:42:59.335039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.712 [2024-07-15 19:42:59.335054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:08.712 [2024-07-15 19:42:59.335071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:21:08.712 [2024-07-15 19:42:59.335081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.712 [2024-07-15 19:42:59.378631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.712 [2024-07-15 19:42:59.378704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:08.712 [2024-07-15 19:42:59.378724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.469 ms 00:21:08.712 [2024-07-15 19:42:59.378735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.712 [2024-07-15 19:42:59.421401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.712 [2024-07-15 19:42:59.421472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:08.712 [2024-07-15 19:42:59.421492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.569 ms 00:21:08.712 [2024-07-15 19:42:59.421502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.712 [2024-07-15 19:42:59.422402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.712 [2024-07-15 19:42:59.422435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:08.712 [2024-07-15 19:42:59.422450] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.818 ms 00:21:08.712 [2024-07-15 19:42:59.422461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.969 [2024-07-15 19:42:59.535987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.969 [2024-07-15 19:42:59.536053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:08.969 [2024-07-15 19:42:59.536197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 113.437 ms 00:21:08.969 [2024-07-15 19:42:59.536209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.969 [2024-07-15 19:42:59.577606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.969 [2024-07-15 19:42:59.577657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:08.969 [2024-07-15 19:42:59.577675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.345 ms 00:21:08.969 [2024-07-15 19:42:59.577686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.969 [2024-07-15 19:42:59.619456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.969 [2024-07-15 19:42:59.619510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:08.969 [2024-07-15 19:42:59.619528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.718 ms 00:21:08.969 [2024-07-15 19:42:59.619538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.969 [2024-07-15 19:42:59.660316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.969 [2024-07-15 19:42:59.660369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:08.969 [2024-07-15 19:42:59.660403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.729 ms 00:21:08.969 [2024-07-15 19:42:59.660414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.969 [2024-07-15 19:42:59.660465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.969 [2024-07-15 19:42:59.660477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:08.969 [2024-07-15 19:42:59.660496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:08.969 [2024-07-15 19:42:59.660506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.969 [2024-07-15 19:42:59.660617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.969 [2024-07-15 19:42:59.660629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:08.969 [2024-07-15 19:42:59.660642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:08.969 [2024-07-15 19:42:59.660652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.969 [2024-07-15 19:42:59.661677] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3013.608 ms, result 0 00:21:08.969 { 00:21:08.969 "name": "ftl0", 00:21:08.969 "uuid": "4517ccd9-d10d-4e63-8718-31e660d2995c" 00:21:08.969 } 00:21:08.969 19:42:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:21:08.969 19:42:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:21:08.969 19:42:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:21:09.226 19:42:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:21:09.483 [2024-07-15 19:43:00.054598] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:09.483 I/O size of 69632 is greater than zero copy threshold (65536). 00:21:09.483 Zero copy mechanism will not be used. 00:21:09.483 Running I/O for 4 seconds... 00:21:13.728 00:21:13.728 Latency(us) 00:21:13.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.728 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:21:13.728 ftl0 : 4.00 2034.09 135.08 0.00 0.00 517.37 205.78 2699.46 00:21:13.728 =================================================================================================================== 00:21:13.728 Total : 2034.09 135.08 0.00 0.00 517.37 205.78 2699.46 00:21:13.728 [2024-07-15 19:43:04.065707] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:13.728 0 00:21:13.728 19:43:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:21:13.728 [2024-07-15 19:43:04.200683] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:13.728 Running I/O for 4 seconds... 00:21:17.911 00:21:17.911 Latency(us) 00:21:17.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.911 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:21:17.911 ftl0 : 4.01 9374.94 36.62 0.00 0.00 13625.00 286.72 34952.53 00:21:17.911 =================================================================================================================== 00:21:17.911 Total : 9374.94 36.62 0.00 0.00 13625.00 0.00 34952.53 00:21:17.911 [2024-07-15 19:43:08.226043] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:17.911 0 00:21:17.911 19:43:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:21:17.911 [2024-07-15 19:43:08.367768] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:17.911 Running I/O for 4 seconds... 
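The randwrite and verify passes in this trace are all issued against a single bdevperf process that was started earlier with -z -T ftl0 (so it waits for perform_tests RPCs and only exercises ftl0) and is then driven over its RPC socket; bdevperf.sh steps @29 through @35, visible above and below, show the exact commands. A minimal sketch of that driver pattern, using the repository paths from the xtrace and omitting the script's timing and error handling:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Step @29: confirm the FTL bdev is registered before issuing any workload.
    "$SPDK"/scripts/rpc.py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0
    # Steps @31-@33: run each workload against the already-running bdevperf process.
    "$SPDK"/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
    "$SPDK"/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
    "$SPDK"/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
    # Step @35: delete the FTL bdev once the runs complete.
    "$SPDK"/scripts/rpc.py bdev_ftl_delete -b ftl0

Each perform_tests call blocks for the duration of its run, and the per-job latency table it produces is the one seen in the surrounding output.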
00:21:22.098 00:21:22.098 Latency(us) 00:21:22.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.098 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:22.098 Verification LBA range: start 0x0 length 0x1400000 00:21:22.098 ftl0 : 4.01 8039.86 31.41 0.00 0.00 15871.70 282.82 19348.72 00:21:22.098 =================================================================================================================== 00:21:22.098 Total : 8039.86 31.41 0.00 0.00 15871.70 0.00 19348.72 00:21:22.098 [2024-07-15 19:43:12.397935] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ft0 00:21:22.098 l0 00:21:22.098 19:43:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:21:22.098 [2024-07-15 19:43:12.678572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.098 [2024-07-15 19:43:12.679335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:22.098 [2024-07-15 19:43:12.679471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:22.098 [2024-07-15 19:43:12.679514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.098 [2024-07-15 19:43:12.679589] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:22.098 [2024-07-15 19:43:12.684180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.098 [2024-07-15 19:43:12.684324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:22.098 [2024-07-15 19:43:12.684429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.465 ms 00:21:22.098 [2024-07-15 19:43:12.684475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.098 [2024-07-15 19:43:12.686257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.098 [2024-07-15 19:43:12.686424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:22.098 [2024-07-15 19:43:12.686578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.725 ms 00:21:22.098 [2024-07-15 19:43:12.686628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.098 [2024-07-15 19:43:12.857629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.098 [2024-07-15 19:43:12.857938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:22.098 [2024-07-15 19:43:12.858065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 170.940 ms 00:21:22.098 [2024-07-15 19:43:12.858179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.098 [2024-07-15 19:43:12.863665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.098 [2024-07-15 19:43:12.863861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:22.098 [2024-07-15 19:43:12.863883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.410 ms 00:21:22.098 [2024-07-15 19:43:12.863897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.357 [2024-07-15 19:43:12.904039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.357 [2024-07-15 19:43:12.904091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:22.357 [2024-07-15 19:43:12.904107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 40.054 ms 00:21:22.357 [2024-07-15 19:43:12.904120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.357 [2024-07-15 19:43:12.927964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.357 [2024-07-15 19:43:12.928036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:22.357 [2024-07-15 19:43:12.928054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.799 ms 00:21:22.357 [2024-07-15 19:43:12.928071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.357 [2024-07-15 19:43:12.928259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.357 [2024-07-15 19:43:12.928277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:22.357 [2024-07-15 19:43:12.928288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:21:22.357 [2024-07-15 19:43:12.928305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.357 [2024-07-15 19:43:12.969081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.357 [2024-07-15 19:43:12.969159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:22.357 [2024-07-15 19:43:12.969176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.755 ms 00:21:22.357 [2024-07-15 19:43:12.969188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.357 [2024-07-15 19:43:13.009205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.357 [2024-07-15 19:43:13.009268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:22.357 [2024-07-15 19:43:13.009300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.965 ms 00:21:22.357 [2024-07-15 19:43:13.009313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.357 [2024-07-15 19:43:13.048417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.357 [2024-07-15 19:43:13.048491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:22.357 [2024-07-15 19:43:13.048509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.052 ms 00:21:22.357 [2024-07-15 19:43:13.048522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.357 [2024-07-15 19:43:13.090825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.357 [2024-07-15 19:43:13.090899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:22.357 [2024-07-15 19:43:13.090916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.165 ms 00:21:22.357 [2024-07-15 19:43:13.090933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.357 [2024-07-15 19:43:13.091009] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:22.357 [2024-07-15 19:43:13.091032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:21:22.357 [2024-07-15 19:43:13.091084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:22.357 [2024-07-15 19:43:13.091723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.091734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.091748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.091759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.091773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.091802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.091817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.091828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.091844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.091856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.091870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.091881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.091894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.091905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.091919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.091929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.091944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.091955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.091968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.091979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.091992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092044] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:22.358 [2024-07-15 19:43:13.092334] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:22.358 [2024-07-15 19:43:13.092344] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4517ccd9-d10d-4e63-8718-31e660d2995c 00:21:22.358 [2024-07-15 19:43:13.092358] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:22.358 [2024-07-15 19:43:13.092368] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:21:22.358 [2024-07-15 19:43:13.092379] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:22.358 [2024-07-15 19:43:13.092390] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:22.358 [2024-07-15 19:43:13.092406] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:22.358 [2024-07-15 19:43:13.092416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:22.358 [2024-07-15 19:43:13.092429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:22.358 [2024-07-15 19:43:13.092438] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:22.358 [2024-07-15 19:43:13.092452] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:22.358 [2024-07-15 19:43:13.092463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.358 [2024-07-15 19:43:13.092476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:22.358 [2024-07-15 19:43:13.092487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.455 ms 00:21:22.358 [2024-07-15 19:43:13.092499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.358 [2024-07-15 19:43:13.112455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.358 [2024-07-15 19:43:13.112521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:22.358 [2024-07-15 19:43:13.112540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.881 ms 00:21:22.358 [2024-07-15 19:43:13.112553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.358 [2024-07-15 19:43:13.113117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.358 [2024-07-15 19:43:13.113132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:22.358 [2024-07-15 19:43:13.113143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:21:22.358 [2024-07-15 19:43:13.113155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.616 [2024-07-15 19:43:13.165245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.616 [2024-07-15 19:43:13.165320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:22.616 [2024-07-15 19:43:13.165337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.616 [2024-07-15 19:43:13.165353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.616 [2024-07-15 19:43:13.165433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.616 [2024-07-15 19:43:13.165447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:22.616 [2024-07-15 19:43:13.165458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.616 [2024-07-15 19:43:13.165471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.616 [2024-07-15 19:43:13.165575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.616 [2024-07-15 19:43:13.165593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:22.616 [2024-07-15 19:43:13.165604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.616 [2024-07-15 19:43:13.165621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.616 [2024-07-15 19:43:13.165639] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.616 [2024-07-15 19:43:13.165653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:22.616 [2024-07-15 19:43:13.165663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.616 [2024-07-15 19:43:13.165675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.616 [2024-07-15 19:43:13.294038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.616 [2024-07-15 19:43:13.294108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:22.616 [2024-07-15 19:43:13.294127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.616 [2024-07-15 19:43:13.294143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.616 [2024-07-15 19:43:13.399808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.616 [2024-07-15 19:43:13.399863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:22.616 [2024-07-15 19:43:13.399878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.616 [2024-07-15 19:43:13.399891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.616 [2024-07-15 19:43:13.399992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.616 [2024-07-15 19:43:13.400008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:22.616 [2024-07-15 19:43:13.400020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.616 [2024-07-15 19:43:13.400032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.616 [2024-07-15 19:43:13.400083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.616 [2024-07-15 19:43:13.400097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:22.616 [2024-07-15 19:43:13.400108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.616 [2024-07-15 19:43:13.400121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.616 [2024-07-15 19:43:13.400238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.616 [2024-07-15 19:43:13.400255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:22.616 [2024-07-15 19:43:13.400266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.616 [2024-07-15 19:43:13.400282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.616 [2024-07-15 19:43:13.400320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.616 [2024-07-15 19:43:13.400336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:22.616 [2024-07-15 19:43:13.400346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.616 [2024-07-15 19:43:13.400359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.616 [2024-07-15 19:43:13.400398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.616 [2024-07-15 19:43:13.400411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:22.616 [2024-07-15 19:43:13.400422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.616 [2024-07-15 19:43:13.400434] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:22.616 [2024-07-15 19:43:13.400483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.616 [2024-07-15 19:43:13.400497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:22.616 [2024-07-15 19:43:13.400508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.616 [2024-07-15 19:43:13.400520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.616 [2024-07-15 19:43:13.400645] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 722.045 ms, result 0 00:21:22.616 true 00:21:22.874 19:43:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 80898 00:21:22.874 19:43:13 ftl.ftl_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 80898 ']' 00:21:22.874 19:43:13 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # kill -0 80898 00:21:22.874 19:43:13 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # uname 00:21:22.874 19:43:13 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:22.874 19:43:13 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80898 00:21:22.874 killing process with pid 80898 00:21:22.874 Received shutdown signal, test time was about 4.000000 seconds 00:21:22.874 00:21:22.874 Latency(us) 00:21:22.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.874 =================================================================================================================== 00:21:22.874 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:22.874 19:43:13 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:22.874 19:43:13 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:22.874 19:43:13 ftl.ftl_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80898' 00:21:22.874 19:43:13 ftl.ftl_bdevperf -- common/autotest_common.sh@967 -- # kill 80898 00:21:22.874 19:43:13 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # wait 80898 00:21:28.148 19:43:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:21:28.148 19:43:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:21:28.148 19:43:18 ftl.ftl_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:28.148 19:43:18 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:28.148 Remove shared memory files 00:21:28.148 19:43:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:21:28.148 19:43:18 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:28.148 19:43:18 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:21:28.148 19:43:18 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:21:28.148 19:43:18 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:21:28.148 19:43:18 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:28.148 19:43:18 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:21:28.148 ************************************ 00:21:28.148 END TEST ftl_bdevperf 00:21:28.148 ************************************ 00:21:28.148 00:21:28.148 real 0m25.960s 00:21:28.148 user 0m28.852s 00:21:28.148 sys 0m1.389s 00:21:28.148 19:43:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:28.148 19:43:18 ftl.ftl_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:21:28.148 19:43:18 ftl -- common/autotest_common.sh@1142 -- # return 0 00:21:28.148 19:43:18 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:28.148 19:43:18 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:28.148 19:43:18 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:28.148 19:43:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:28.148 ************************************ 00:21:28.148 START TEST ftl_trim 00:21:28.148 ************************************ 00:21:28.148 19:43:18 ftl.ftl_trim -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:28.148 * Looking for test storage... 00:21:28.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:28.148 
19:43:18 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:28.148 19:43:18 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:28.149 19:43:18 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:28.149 19:43:18 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:28.149 19:43:18 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:21:28.149 19:43:18 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:21:28.149 19:43:18 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:21:28.149 19:43:18 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:21:28.149 19:43:18 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:21:28.149 19:43:18 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:21:28.149 19:43:18 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:21:28.149 19:43:18 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:21:28.149 19:43:18 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:28.149 19:43:18 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:28.149 19:43:18 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:28.149 19:43:18 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=81261 00:21:28.149 19:43:18 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 81261 00:21:28.149 19:43:18 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:21:28.149 19:43:18 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 81261 ']' 00:21:28.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.149 19:43:18 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.149 19:43:18 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:28.149 19:43:18 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.149 19:43:18 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:28.149 19:43:18 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:28.149 [2024-07-15 19:43:18.539059] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
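Before any FTL configuration happens, trim.sh@39-41 start a dedicated spdk_tgt with -m 0x7 (three reactors, whose start-up messages follow below) and wait for its RPC socket at /var/tmp/spdk.sock to answer. A rough, simplified stand-in for that start-and-wait step; the real waitforlisten helper used by the script is more thorough than this polling loop:

    SPDK=/home/vagrant/spdk_repo/spdk
    # trim.sh@39-40: launch the target on cores 0-2 and remember its pid.
    "$SPDK"/build/bin/spdk_tgt -m 0x7 &
    svcpid=$!
    # trim.sh@41 (waitforlisten, simplified): poll the default RPC socket until the target answers.
    until "$SPDK"/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$svcpid" || exit 1   # give up if the target exited during start-up
        sleep 0.5
    done

Once the socket responds, the script proceeds to build the FTL device stack (base bdev, lvstore, lvol, NV cache split) over rpc.py, as the following xtrace shows.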
00:21:28.149 [2024-07-15 19:43:18.539235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81261 ] 00:21:28.149 [2024-07-15 19:43:18.728316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:28.406 [2024-07-15 19:43:19.010438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.406 [2024-07-15 19:43:19.010483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.406 [2024-07-15 19:43:19.010497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.339 19:43:19 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:29.339 19:43:19 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:21:29.339 19:43:19 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:29.339 19:43:19 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:21:29.339 19:43:19 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:29.339 19:43:19 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:21:29.339 19:43:19 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:21:29.339 19:43:19 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:29.597 19:43:20 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:29.597 19:43:20 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:21:29.597 19:43:20 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:29.597 19:43:20 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:21:29.597 19:43:20 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:29.597 19:43:20 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:29.597 19:43:20 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:21:29.597 19:43:20 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:29.855 19:43:20 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:29.855 { 00:21:29.855 "name": "nvme0n1", 00:21:29.855 "aliases": [ 00:21:29.856 "a02a33e1-240a-4691-b461-6fe3924bcf4a" 00:21:29.856 ], 00:21:29.856 "product_name": "NVMe disk", 00:21:29.856 "block_size": 4096, 00:21:29.856 "num_blocks": 1310720, 00:21:29.856 "uuid": "a02a33e1-240a-4691-b461-6fe3924bcf4a", 00:21:29.856 "assigned_rate_limits": { 00:21:29.856 "rw_ios_per_sec": 0, 00:21:29.856 "rw_mbytes_per_sec": 0, 00:21:29.856 "r_mbytes_per_sec": 0, 00:21:29.856 "w_mbytes_per_sec": 0 00:21:29.856 }, 00:21:29.856 "claimed": true, 00:21:29.856 "claim_type": "read_many_write_one", 00:21:29.856 "zoned": false, 00:21:29.856 "supported_io_types": { 00:21:29.856 "read": true, 00:21:29.856 "write": true, 00:21:29.856 "unmap": true, 00:21:29.856 "flush": true, 00:21:29.856 "reset": true, 00:21:29.856 "nvme_admin": true, 00:21:29.856 "nvme_io": true, 00:21:29.856 "nvme_io_md": false, 00:21:29.856 "write_zeroes": true, 00:21:29.856 "zcopy": false, 00:21:29.856 "get_zone_info": false, 00:21:29.856 "zone_management": false, 00:21:29.856 "zone_append": false, 00:21:29.856 "compare": true, 00:21:29.856 "compare_and_write": false, 00:21:29.856 "abort": true, 00:21:29.856 "seek_hole": false, 00:21:29.856 "seek_data": false, 00:21:29.856 
"copy": true, 00:21:29.856 "nvme_iov_md": false 00:21:29.856 }, 00:21:29.856 "driver_specific": { 00:21:29.856 "nvme": [ 00:21:29.856 { 00:21:29.856 "pci_address": "0000:00:11.0", 00:21:29.856 "trid": { 00:21:29.856 "trtype": "PCIe", 00:21:29.856 "traddr": "0000:00:11.0" 00:21:29.856 }, 00:21:29.856 "ctrlr_data": { 00:21:29.856 "cntlid": 0, 00:21:29.856 "vendor_id": "0x1b36", 00:21:29.856 "model_number": "QEMU NVMe Ctrl", 00:21:29.856 "serial_number": "12341", 00:21:29.856 "firmware_revision": "8.0.0", 00:21:29.856 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:29.856 "oacs": { 00:21:29.856 "security": 0, 00:21:29.856 "format": 1, 00:21:29.856 "firmware": 0, 00:21:29.856 "ns_manage": 1 00:21:29.856 }, 00:21:29.856 "multi_ctrlr": false, 00:21:29.856 "ana_reporting": false 00:21:29.856 }, 00:21:29.856 "vs": { 00:21:29.856 "nvme_version": "1.4" 00:21:29.856 }, 00:21:29.856 "ns_data": { 00:21:29.856 "id": 1, 00:21:29.856 "can_share": false 00:21:29.856 } 00:21:29.856 } 00:21:29.856 ], 00:21:29.856 "mp_policy": "active_passive" 00:21:29.856 } 00:21:29.856 } 00:21:29.856 ]' 00:21:29.856 19:43:20 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:29.856 19:43:20 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:21:29.856 19:43:20 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:29.856 19:43:20 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:29.856 19:43:20 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:29.856 19:43:20 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:21:29.856 19:43:20 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:21:29.856 19:43:20 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:29.856 19:43:20 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:21:29.856 19:43:20 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:29.856 19:43:20 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:30.422 19:43:20 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=33484c7e-15c8-46b4-a93b-6199b30102b7 00:21:30.422 19:43:20 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:21:30.422 19:43:20 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 33484c7e-15c8-46b4-a93b-6199b30102b7 00:21:30.422 19:43:21 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:30.680 19:43:21 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=e315323a-c5a8-4b52-ba26-cdacd06e1923 00:21:30.680 19:43:21 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e315323a-c5a8-4b52-ba26-cdacd06e1923 00:21:30.938 19:43:21 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=94011795-adfd-48bf-8951-de7a76f29ffa 00:21:30.938 19:43:21 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 94011795-adfd-48bf-8951-de7a76f29ffa 00:21:30.938 19:43:21 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:21:30.938 19:43:21 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:30.938 19:43:21 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=94011795-adfd-48bf-8951-de7a76f29ffa 00:21:30.938 19:43:21 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:21:30.938 19:43:21 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 94011795-adfd-48bf-8951-de7a76f29ffa 00:21:30.938 19:43:21 
ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=94011795-adfd-48bf-8951-de7a76f29ffa 00:21:30.938 19:43:21 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:30.938 19:43:21 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:30.938 19:43:21 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:21:30.938 19:43:21 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 94011795-adfd-48bf-8951-de7a76f29ffa 00:21:31.196 19:43:21 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:31.197 { 00:21:31.197 "name": "94011795-adfd-48bf-8951-de7a76f29ffa", 00:21:31.197 "aliases": [ 00:21:31.197 "lvs/nvme0n1p0" 00:21:31.197 ], 00:21:31.197 "product_name": "Logical Volume", 00:21:31.197 "block_size": 4096, 00:21:31.197 "num_blocks": 26476544, 00:21:31.197 "uuid": "94011795-adfd-48bf-8951-de7a76f29ffa", 00:21:31.197 "assigned_rate_limits": { 00:21:31.197 "rw_ios_per_sec": 0, 00:21:31.197 "rw_mbytes_per_sec": 0, 00:21:31.197 "r_mbytes_per_sec": 0, 00:21:31.197 "w_mbytes_per_sec": 0 00:21:31.197 }, 00:21:31.197 "claimed": false, 00:21:31.197 "zoned": false, 00:21:31.197 "supported_io_types": { 00:21:31.197 "read": true, 00:21:31.197 "write": true, 00:21:31.197 "unmap": true, 00:21:31.197 "flush": false, 00:21:31.197 "reset": true, 00:21:31.197 "nvme_admin": false, 00:21:31.197 "nvme_io": false, 00:21:31.197 "nvme_io_md": false, 00:21:31.197 "write_zeroes": true, 00:21:31.197 "zcopy": false, 00:21:31.197 "get_zone_info": false, 00:21:31.197 "zone_management": false, 00:21:31.197 "zone_append": false, 00:21:31.197 "compare": false, 00:21:31.197 "compare_and_write": false, 00:21:31.197 "abort": false, 00:21:31.197 "seek_hole": true, 00:21:31.197 "seek_data": true, 00:21:31.197 "copy": false, 00:21:31.197 "nvme_iov_md": false 00:21:31.197 }, 00:21:31.197 "driver_specific": { 00:21:31.197 "lvol": { 00:21:31.197 "lvol_store_uuid": "e315323a-c5a8-4b52-ba26-cdacd06e1923", 00:21:31.197 "base_bdev": "nvme0n1", 00:21:31.197 "thin_provision": true, 00:21:31.197 "num_allocated_clusters": 0, 00:21:31.197 "snapshot": false, 00:21:31.197 "clone": false, 00:21:31.197 "esnap_clone": false 00:21:31.197 } 00:21:31.197 } 00:21:31.197 } 00:21:31.197 ]' 00:21:31.197 19:43:21 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:31.197 19:43:21 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:21:31.197 19:43:21 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:31.197 19:43:21 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:31.197 19:43:21 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:31.197 19:43:21 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:21:31.197 19:43:21 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:21:31.197 19:43:21 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:21:31.197 19:43:21 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:31.455 19:43:22 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:31.455 19:43:22 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:31.455 19:43:22 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 94011795-adfd-48bf-8951-de7a76f29ffa 00:21:31.455 19:43:22 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=94011795-adfd-48bf-8951-de7a76f29ffa 00:21:31.455 
19:43:22 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:31.455 19:43:22 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:31.455 19:43:22 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:21:31.455 19:43:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 94011795-adfd-48bf-8951-de7a76f29ffa 00:21:31.713 19:43:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:31.713 { 00:21:31.713 "name": "94011795-adfd-48bf-8951-de7a76f29ffa", 00:21:31.713 "aliases": [ 00:21:31.713 "lvs/nvme0n1p0" 00:21:31.713 ], 00:21:31.713 "product_name": "Logical Volume", 00:21:31.713 "block_size": 4096, 00:21:31.713 "num_blocks": 26476544, 00:21:31.713 "uuid": "94011795-adfd-48bf-8951-de7a76f29ffa", 00:21:31.713 "assigned_rate_limits": { 00:21:31.713 "rw_ios_per_sec": 0, 00:21:31.713 "rw_mbytes_per_sec": 0, 00:21:31.713 "r_mbytes_per_sec": 0, 00:21:31.713 "w_mbytes_per_sec": 0 00:21:31.713 }, 00:21:31.713 "claimed": false, 00:21:31.713 "zoned": false, 00:21:31.713 "supported_io_types": { 00:21:31.713 "read": true, 00:21:31.713 "write": true, 00:21:31.713 "unmap": true, 00:21:31.713 "flush": false, 00:21:31.713 "reset": true, 00:21:31.713 "nvme_admin": false, 00:21:31.713 "nvme_io": false, 00:21:31.713 "nvme_io_md": false, 00:21:31.713 "write_zeroes": true, 00:21:31.713 "zcopy": false, 00:21:31.713 "get_zone_info": false, 00:21:31.713 "zone_management": false, 00:21:31.713 "zone_append": false, 00:21:31.713 "compare": false, 00:21:31.713 "compare_and_write": false, 00:21:31.713 "abort": false, 00:21:31.713 "seek_hole": true, 00:21:31.713 "seek_data": true, 00:21:31.713 "copy": false, 00:21:31.713 "nvme_iov_md": false 00:21:31.713 }, 00:21:31.713 "driver_specific": { 00:21:31.713 "lvol": { 00:21:31.713 "lvol_store_uuid": "e315323a-c5a8-4b52-ba26-cdacd06e1923", 00:21:31.713 "base_bdev": "nvme0n1", 00:21:31.713 "thin_provision": true, 00:21:31.713 "num_allocated_clusters": 0, 00:21:31.713 "snapshot": false, 00:21:31.713 "clone": false, 00:21:31.713 "esnap_clone": false 00:21:31.713 } 00:21:31.713 } 00:21:31.713 } 00:21:31.713 ]' 00:21:31.713 19:43:22 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:31.713 19:43:22 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:21:31.713 19:43:22 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:31.971 19:43:22 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:31.971 19:43:22 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:31.971 19:43:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:21:31.971 19:43:22 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:21:31.971 19:43:22 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:31.971 19:43:22 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:21:31.971 19:43:22 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:21:31.971 19:43:22 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 94011795-adfd-48bf-8951-de7a76f29ffa 00:21:31.971 19:43:22 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=94011795-adfd-48bf-8951-de7a76f29ffa 00:21:31.971 19:43:22 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:31.971 19:43:22 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:31.971 19:43:22 ftl.ftl_trim -- 
common/autotest_common.sh@1381 -- # local nb 00:21:31.971 19:43:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 94011795-adfd-48bf-8951-de7a76f29ffa 00:21:32.230 19:43:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:32.230 { 00:21:32.230 "name": "94011795-adfd-48bf-8951-de7a76f29ffa", 00:21:32.230 "aliases": [ 00:21:32.230 "lvs/nvme0n1p0" 00:21:32.230 ], 00:21:32.230 "product_name": "Logical Volume", 00:21:32.230 "block_size": 4096, 00:21:32.230 "num_blocks": 26476544, 00:21:32.230 "uuid": "94011795-adfd-48bf-8951-de7a76f29ffa", 00:21:32.230 "assigned_rate_limits": { 00:21:32.230 "rw_ios_per_sec": 0, 00:21:32.230 "rw_mbytes_per_sec": 0, 00:21:32.230 "r_mbytes_per_sec": 0, 00:21:32.230 "w_mbytes_per_sec": 0 00:21:32.230 }, 00:21:32.230 "claimed": false, 00:21:32.230 "zoned": false, 00:21:32.230 "supported_io_types": { 00:21:32.230 "read": true, 00:21:32.230 "write": true, 00:21:32.230 "unmap": true, 00:21:32.230 "flush": false, 00:21:32.230 "reset": true, 00:21:32.230 "nvme_admin": false, 00:21:32.230 "nvme_io": false, 00:21:32.230 "nvme_io_md": false, 00:21:32.230 "write_zeroes": true, 00:21:32.230 "zcopy": false, 00:21:32.230 "get_zone_info": false, 00:21:32.230 "zone_management": false, 00:21:32.230 "zone_append": false, 00:21:32.230 "compare": false, 00:21:32.230 "compare_and_write": false, 00:21:32.230 "abort": false, 00:21:32.230 "seek_hole": true, 00:21:32.230 "seek_data": true, 00:21:32.230 "copy": false, 00:21:32.230 "nvme_iov_md": false 00:21:32.230 }, 00:21:32.230 "driver_specific": { 00:21:32.230 "lvol": { 00:21:32.230 "lvol_store_uuid": "e315323a-c5a8-4b52-ba26-cdacd06e1923", 00:21:32.230 "base_bdev": "nvme0n1", 00:21:32.230 "thin_provision": true, 00:21:32.230 "num_allocated_clusters": 0, 00:21:32.230 "snapshot": false, 00:21:32.230 "clone": false, 00:21:32.230 "esnap_clone": false 00:21:32.230 } 00:21:32.230 } 00:21:32.230 } 00:21:32.230 ]' 00:21:32.230 19:43:22 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:32.230 19:43:22 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:21:32.230 19:43:22 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:32.230 19:43:22 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:32.230 19:43:22 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:32.230 19:43:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:21:32.230 19:43:22 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:21:32.230 19:43:22 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 94011795-adfd-48bf-8951-de7a76f29ffa -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:21:32.491 [2024-07-15 19:43:23.197963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.491 [2024-07-15 19:43:23.198018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:32.491 [2024-07-15 19:43:23.198034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:32.491 [2024-07-15 19:43:23.198048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.491 [2024-07-15 19:43:23.203166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.491 [2024-07-15 19:43:23.203211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:32.491 [2024-07-15 19:43:23.203225] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.083 ms 00:21:32.491 [2024-07-15 19:43:23.203238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.491 [2024-07-15 19:43:23.203386] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:32.491 [2024-07-15 19:43:23.204569] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:32.491 [2024-07-15 19:43:23.204600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.491 [2024-07-15 19:43:23.204618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:32.491 [2024-07-15 19:43:23.204629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.242 ms 00:21:32.491 [2024-07-15 19:43:23.204641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.491 [2024-07-15 19:43:23.204752] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 03f52e51-6fc1-4d5a-8b5f-2a9f46a0322e 00:21:32.491 [2024-07-15 19:43:23.206184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.491 [2024-07-15 19:43:23.206216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:32.491 [2024-07-15 19:43:23.206231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:32.491 [2024-07-15 19:43:23.206241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.491 [2024-07-15 19:43:23.213799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.491 [2024-07-15 19:43:23.213831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:32.491 [2024-07-15 19:43:23.213846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.463 ms 00:21:32.491 [2024-07-15 19:43:23.213857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.491 [2024-07-15 19:43:23.214023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.491 [2024-07-15 19:43:23.214039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:32.491 [2024-07-15 19:43:23.214054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:21:32.491 [2024-07-15 19:43:23.214064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.491 [2024-07-15 19:43:23.214116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.491 [2024-07-15 19:43:23.214127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:32.491 [2024-07-15 19:43:23.214143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:32.491 [2024-07-15 19:43:23.214153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.491 [2024-07-15 19:43:23.214195] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:32.491 [2024-07-15 19:43:23.220313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.491 [2024-07-15 19:43:23.220351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:32.491 [2024-07-15 19:43:23.220363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.126 ms 00:21:32.491 [2024-07-15 19:43:23.220376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.491 [2024-07-15 
19:43:23.220441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.491 [2024-07-15 19:43:23.220456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:32.491 [2024-07-15 19:43:23.220467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:32.491 [2024-07-15 19:43:23.220479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.491 [2024-07-15 19:43:23.220510] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:32.491 [2024-07-15 19:43:23.220642] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:32.491 [2024-07-15 19:43:23.220656] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:32.491 [2024-07-15 19:43:23.220674] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:32.491 [2024-07-15 19:43:23.220687] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:32.491 [2024-07-15 19:43:23.220702] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:32.491 [2024-07-15 19:43:23.220713] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:32.491 [2024-07-15 19:43:23.220725] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:32.491 [2024-07-15 19:43:23.220740] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:32.491 [2024-07-15 19:43:23.220770] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:32.491 [2024-07-15 19:43:23.220800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.491 [2024-07-15 19:43:23.220813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:32.491 [2024-07-15 19:43:23.220823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:21:32.491 [2024-07-15 19:43:23.220836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.491 [2024-07-15 19:43:23.220926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.491 [2024-07-15 19:43:23.220939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:32.491 [2024-07-15 19:43:23.220949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:32.491 [2024-07-15 19:43:23.220961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.491 [2024-07-15 19:43:23.221073] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:32.491 [2024-07-15 19:43:23.221090] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:32.491 [2024-07-15 19:43:23.221101] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:32.491 [2024-07-15 19:43:23.221114] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.491 [2024-07-15 19:43:23.221124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:32.491 [2024-07-15 19:43:23.221135] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:32.491 [2024-07-15 19:43:23.221145] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:32.491 [2024-07-15 19:43:23.221157] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:21:32.491 [2024-07-15 19:43:23.221166] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:32.491 [2024-07-15 19:43:23.221178] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:32.491 [2024-07-15 19:43:23.221187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:32.491 [2024-07-15 19:43:23.221199] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:32.491 [2024-07-15 19:43:23.221208] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:32.491 [2024-07-15 19:43:23.221221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:32.491 [2024-07-15 19:43:23.221231] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:32.491 [2024-07-15 19:43:23.221242] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.491 [2024-07-15 19:43:23.221251] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:32.491 [2024-07-15 19:43:23.221265] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:32.491 [2024-07-15 19:43:23.221275] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.491 [2024-07-15 19:43:23.221299] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:32.491 [2024-07-15 19:43:23.221314] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:32.491 [2024-07-15 19:43:23.221326] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:32.491 [2024-07-15 19:43:23.221336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:32.491 [2024-07-15 19:43:23.221348] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:32.491 [2024-07-15 19:43:23.221356] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:32.491 [2024-07-15 19:43:23.221368] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:32.491 [2024-07-15 19:43:23.221377] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:32.491 [2024-07-15 19:43:23.221388] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:32.491 [2024-07-15 19:43:23.221398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:32.491 [2024-07-15 19:43:23.221410] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:32.491 [2024-07-15 19:43:23.221419] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:32.491 [2024-07-15 19:43:23.221430] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:32.491 [2024-07-15 19:43:23.221439] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:32.491 [2024-07-15 19:43:23.221453] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:32.491 [2024-07-15 19:43:23.221463] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:32.491 [2024-07-15 19:43:23.221474] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:32.491 [2024-07-15 19:43:23.221483] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:32.491 [2024-07-15 19:43:23.221495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:32.491 [2024-07-15 19:43:23.221504] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:32.491 [2024-07-15 19:43:23.221517] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.491 [2024-07-15 19:43:23.221527] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:32.491 [2024-07-15 19:43:23.221539] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:32.491 [2024-07-15 19:43:23.221548] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.491 [2024-07-15 19:43:23.221559] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:32.492 [2024-07-15 19:43:23.221569] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:32.492 [2024-07-15 19:43:23.221581] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:32.492 [2024-07-15 19:43:23.221590] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.492 [2024-07-15 19:43:23.221603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:32.492 [2024-07-15 19:43:23.221612] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:32.492 [2024-07-15 19:43:23.221626] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:32.492 [2024-07-15 19:43:23.221635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:32.492 [2024-07-15 19:43:23.221647] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:32.492 [2024-07-15 19:43:23.221656] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:32.492 [2024-07-15 19:43:23.221672] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:32.492 [2024-07-15 19:43:23.221687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:32.492 [2024-07-15 19:43:23.221702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:32.492 [2024-07-15 19:43:23.221713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:32.492 [2024-07-15 19:43:23.221726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:32.492 [2024-07-15 19:43:23.221737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:32.492 [2024-07-15 19:43:23.221750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:32.492 [2024-07-15 19:43:23.221760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:32.492 [2024-07-15 19:43:23.221773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:32.492 [2024-07-15 19:43:23.221793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:32.492 [2024-07-15 19:43:23.221807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:32.492 [2024-07-15 19:43:23.221818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:32.492 [2024-07-15 19:43:23.221833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:32.492 [2024-07-15 19:43:23.221843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:32.492 [2024-07-15 19:43:23.221856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:32.492 [2024-07-15 19:43:23.221867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:32.492 [2024-07-15 19:43:23.221880] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:32.492 [2024-07-15 19:43:23.221891] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:32.492 [2024-07-15 19:43:23.221905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:32.492 [2024-07-15 19:43:23.221915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:32.492 [2024-07-15 19:43:23.221928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:32.492 [2024-07-15 19:43:23.221939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:32.492 [2024-07-15 19:43:23.221952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.492 [2024-07-15 19:43:23.221962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:32.492 [2024-07-15 19:43:23.221975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.928 ms 00:21:32.492 [2024-07-15 19:43:23.221984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.492 [2024-07-15 19:43:23.222069] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
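The layout dump above can be cross-checked with a quick calculation: the FTL reports 23592960 L2P entries at an address size of 4 bytes, which comes to exactly the 90.00 MiB shown for the l2p region. A minimal sketch of that arithmetic in shell (variable names are illustrative only, not part of the test scripts):

    # L2P region size = entry count * address size, expressed in MiB
    l2p_entries=23592960
    l2p_addr_size=4
    echo $(( l2p_entries * l2p_addr_size / 1024 / 1024 ))   # prints 90, matching "Region l2p ... blocks: 90.00 MiB"

The same dump also reflects the geometry set up earlier in the run: a 103424.00 MiB base device (the thin-provisioned lvol) and a 5171.00 MiB NV cache partition (nvc0n1p0).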
00:21:32.492 [2024-07-15 19:43:23.222081] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:35.774 [2024-07-15 19:43:26.028196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.774 [2024-07-15 19:43:26.028462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:35.774 [2024-07-15 19:43:26.028571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2806.101 ms 00:21:35.774 [2024-07-15 19:43:26.028610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.774 [2024-07-15 19:43:26.069604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.774 [2024-07-15 19:43:26.069897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:35.774 [2024-07-15 19:43:26.070002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.617 ms 00:21:35.774 [2024-07-15 19:43:26.070040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.774 [2024-07-15 19:43:26.070230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.774 [2024-07-15 19:43:26.070336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:35.774 [2024-07-15 19:43:26.070444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:35.774 [2024-07-15 19:43:26.070478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.774 [2024-07-15 19:43:26.138290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.774 [2024-07-15 19:43:26.138493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:35.774 [2024-07-15 19:43:26.138617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.744 ms 00:21:35.774 [2024-07-15 19:43:26.138759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.774 [2024-07-15 19:43:26.139026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.774 [2024-07-15 19:43:26.139053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:35.774 [2024-07-15 19:43:26.139074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:35.774 [2024-07-15 19:43:26.139089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.774 [2024-07-15 19:43:26.139615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.774 [2024-07-15 19:43:26.139641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:35.774 [2024-07-15 19:43:26.139661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:21:35.774 [2024-07-15 19:43:26.139676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.774 [2024-07-15 19:43:26.139854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.774 [2024-07-15 19:43:26.139871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:35.774 [2024-07-15 19:43:26.139890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:21:35.774 [2024-07-15 19:43:26.139904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.774 [2024-07-15 19:43:26.166467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.774 [2024-07-15 19:43:26.166509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:35.774 [2024-07-15 
19:43:26.166525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.511 ms 00:21:35.774 [2024-07-15 19:43:26.166536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.774 [2024-07-15 19:43:26.180361] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:35.774 [2024-07-15 19:43:26.197118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.774 [2024-07-15 19:43:26.197182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:35.774 [2024-07-15 19:43:26.197198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.443 ms 00:21:35.774 [2024-07-15 19:43:26.197211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.774 [2024-07-15 19:43:26.287037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.774 [2024-07-15 19:43:26.287110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:35.774 [2024-07-15 19:43:26.287128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.711 ms 00:21:35.774 [2024-07-15 19:43:26.287141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.774 [2024-07-15 19:43:26.287381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.774 [2024-07-15 19:43:26.287398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:35.774 [2024-07-15 19:43:26.287410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:21:35.774 [2024-07-15 19:43:26.287427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.774 [2024-07-15 19:43:26.329406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.774 [2024-07-15 19:43:26.329458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:35.774 [2024-07-15 19:43:26.329473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.943 ms 00:21:35.774 [2024-07-15 19:43:26.329486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.774 [2024-07-15 19:43:26.370771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.774 [2024-07-15 19:43:26.370836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:35.774 [2024-07-15 19:43:26.370851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.195 ms 00:21:35.774 [2024-07-15 19:43:26.370864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.774 [2024-07-15 19:43:26.371745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.774 [2024-07-15 19:43:26.371795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:35.774 [2024-07-15 19:43:26.371810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms 00:21:35.774 [2024-07-15 19:43:26.371823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.774 [2024-07-15 19:43:26.485101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.774 [2024-07-15 19:43:26.485157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:35.774 [2024-07-15 19:43:26.485184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 113.234 ms 00:21:35.774 [2024-07-15 19:43:26.485203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.774 [2024-07-15 
19:43:26.527480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.774 [2024-07-15 19:43:26.527531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:35.774 [2024-07-15 19:43:26.527547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.174 ms 00:21:35.774 [2024-07-15 19:43:26.527564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.034 [2024-07-15 19:43:26.566348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.034 [2024-07-15 19:43:26.566433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:36.034 [2024-07-15 19:43:26.566457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.693 ms 00:21:36.034 [2024-07-15 19:43:26.566477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.034 [2024-07-15 19:43:26.605380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.034 [2024-07-15 19:43:26.605432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:36.034 [2024-07-15 19:43:26.605448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.767 ms 00:21:36.034 [2024-07-15 19:43:26.605461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.034 [2024-07-15 19:43:26.605554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.034 [2024-07-15 19:43:26.605570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:36.034 [2024-07-15 19:43:26.605582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:36.034 [2024-07-15 19:43:26.605598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.034 [2024-07-15 19:43:26.605680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.034 [2024-07-15 19:43:26.605694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:36.034 [2024-07-15 19:43:26.605705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:36.034 [2024-07-15 19:43:26.605736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.034 [2024-07-15 19:43:26.606763] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:36.034 [2024-07-15 19:43:26.611943] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3408.429 ms, result 0 00:21:36.034 [2024-07-15 19:43:26.612917] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:36.034 { 00:21:36.034 "name": "ftl0", 00:21:36.034 "uuid": "03f52e51-6fc1-4d5a-8b5f-2a9f46a0322e" 00:21:36.034 } 00:21:36.034 19:43:26 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:21:36.034 19:43:26 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:21:36.034 19:43:26 ftl.ftl_trim -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:36.034 19:43:26 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local i 00:21:36.034 19:43:26 ftl.ftl_trim -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:36.034 19:43:26 ftl.ftl_trim -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:36.034 19:43:26 ftl.ftl_trim -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:36.293 19:43:26 ftl.ftl_trim -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:36.611 [ 00:21:36.612 { 00:21:36.612 "name": "ftl0", 00:21:36.612 "aliases": [ 00:21:36.612 "03f52e51-6fc1-4d5a-8b5f-2a9f46a0322e" 00:21:36.612 ], 00:21:36.612 "product_name": "FTL disk", 00:21:36.612 "block_size": 4096, 00:21:36.612 "num_blocks": 23592960, 00:21:36.612 "uuid": "03f52e51-6fc1-4d5a-8b5f-2a9f46a0322e", 00:21:36.612 "assigned_rate_limits": { 00:21:36.612 "rw_ios_per_sec": 0, 00:21:36.612 "rw_mbytes_per_sec": 0, 00:21:36.612 "r_mbytes_per_sec": 0, 00:21:36.612 "w_mbytes_per_sec": 0 00:21:36.612 }, 00:21:36.612 "claimed": false, 00:21:36.612 "zoned": false, 00:21:36.612 "supported_io_types": { 00:21:36.612 "read": true, 00:21:36.612 "write": true, 00:21:36.612 "unmap": true, 00:21:36.612 "flush": true, 00:21:36.612 "reset": false, 00:21:36.612 "nvme_admin": false, 00:21:36.612 "nvme_io": false, 00:21:36.612 "nvme_io_md": false, 00:21:36.612 "write_zeroes": true, 00:21:36.612 "zcopy": false, 00:21:36.612 "get_zone_info": false, 00:21:36.612 "zone_management": false, 00:21:36.612 "zone_append": false, 00:21:36.612 "compare": false, 00:21:36.612 "compare_and_write": false, 00:21:36.612 "abort": false, 00:21:36.612 "seek_hole": false, 00:21:36.612 "seek_data": false, 00:21:36.612 "copy": false, 00:21:36.612 "nvme_iov_md": false 00:21:36.612 }, 00:21:36.612 "driver_specific": { 00:21:36.612 "ftl": { 00:21:36.612 "base_bdev": "94011795-adfd-48bf-8951-de7a76f29ffa", 00:21:36.612 "cache": "nvc0n1p0" 00:21:36.612 } 00:21:36.612 } 00:21:36.612 } 00:21:36.612 ] 00:21:36.612 19:43:27 ftl.ftl_trim -- common/autotest_common.sh@905 -- # return 0 00:21:36.612 19:43:27 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:21:36.612 19:43:27 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:36.612 19:43:27 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:21:36.612 19:43:27 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:21:36.885 19:43:27 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:21:36.885 { 00:21:36.885 "name": "ftl0", 00:21:36.885 "aliases": [ 00:21:36.885 "03f52e51-6fc1-4d5a-8b5f-2a9f46a0322e" 00:21:36.885 ], 00:21:36.885 "product_name": "FTL disk", 00:21:36.885 "block_size": 4096, 00:21:36.885 "num_blocks": 23592960, 00:21:36.885 "uuid": "03f52e51-6fc1-4d5a-8b5f-2a9f46a0322e", 00:21:36.885 "assigned_rate_limits": { 00:21:36.885 "rw_ios_per_sec": 0, 00:21:36.885 "rw_mbytes_per_sec": 0, 00:21:36.885 "r_mbytes_per_sec": 0, 00:21:36.885 "w_mbytes_per_sec": 0 00:21:36.885 }, 00:21:36.885 "claimed": false, 00:21:36.885 "zoned": false, 00:21:36.885 "supported_io_types": { 00:21:36.885 "read": true, 00:21:36.885 "write": true, 00:21:36.885 "unmap": true, 00:21:36.885 "flush": true, 00:21:36.885 "reset": false, 00:21:36.885 "nvme_admin": false, 00:21:36.885 "nvme_io": false, 00:21:36.885 "nvme_io_md": false, 00:21:36.885 "write_zeroes": true, 00:21:36.885 "zcopy": false, 00:21:36.885 "get_zone_info": false, 00:21:36.885 "zone_management": false, 00:21:36.885 "zone_append": false, 00:21:36.885 "compare": false, 00:21:36.885 "compare_and_write": false, 00:21:36.885 "abort": false, 00:21:36.885 "seek_hole": false, 00:21:36.885 "seek_data": false, 00:21:36.885 "copy": false, 00:21:36.885 "nvme_iov_md": false 00:21:36.885 }, 00:21:36.885 "driver_specific": { 00:21:36.885 "ftl": { 00:21:36.885 "base_bdev": "94011795-adfd-48bf-8951-de7a76f29ffa", 00:21:36.885 "cache": "nvc0n1p0" 
00:21:36.885 } 00:21:36.885 } 00:21:36.885 } 00:21:36.885 ]' 00:21:36.885 19:43:27 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:21:36.885 19:43:27 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:21:36.885 19:43:27 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:37.144 [2024-07-15 19:43:27.908063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.144 [2024-07-15 19:43:27.908115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:37.144 [2024-07-15 19:43:27.908133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:37.144 [2024-07-15 19:43:27.908145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.144 [2024-07-15 19:43:27.908184] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:37.144 [2024-07-15 19:43:27.912228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.144 [2024-07-15 19:43:27.912266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:37.144 [2024-07-15 19:43:27.912279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.026 ms 00:21:37.144 [2024-07-15 19:43:27.912298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.144 [2024-07-15 19:43:27.912869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.144 [2024-07-15 19:43:27.912891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:37.144 [2024-07-15 19:43:27.912903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.499 ms 00:21:37.144 [2024-07-15 19:43:27.912918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.144 [2024-07-15 19:43:27.915824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.144 [2024-07-15 19:43:27.915852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:37.144 [2024-07-15 19:43:27.915863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.876 ms 00:21:37.144 [2024-07-15 19:43:27.915875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.144 [2024-07-15 19:43:27.921614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.144 [2024-07-15 19:43:27.921650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:37.144 [2024-07-15 19:43:27.921662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.707 ms 00:21:37.144 [2024-07-15 19:43:27.921674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.403 [2024-07-15 19:43:27.961045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.403 [2024-07-15 19:43:27.961091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:37.403 [2024-07-15 19:43:27.961105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.289 ms 00:21:37.403 [2024-07-15 19:43:27.961121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.403 [2024-07-15 19:43:27.985137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.403 [2024-07-15 19:43:27.985186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:37.403 [2024-07-15 19:43:27.985201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.930 ms 00:21:37.403 
[2024-07-15 19:43:27.985217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.403 [2024-07-15 19:43:27.985443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.403 [2024-07-15 19:43:27.985460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:37.403 [2024-07-15 19:43:27.985472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:21:37.403 [2024-07-15 19:43:27.985484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.403 [2024-07-15 19:43:28.022975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.403 [2024-07-15 19:43:28.023017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:37.403 [2024-07-15 19:43:28.023031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.461 ms 00:21:37.403 [2024-07-15 19:43:28.023043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.403 [2024-07-15 19:43:28.060660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.403 [2024-07-15 19:43:28.060702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:37.403 [2024-07-15 19:43:28.060716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.534 ms 00:21:37.403 [2024-07-15 19:43:28.060731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.403 [2024-07-15 19:43:28.099346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.403 [2024-07-15 19:43:28.099384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:37.403 [2024-07-15 19:43:28.099398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.518 ms 00:21:37.403 [2024-07-15 19:43:28.099410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.403 [2024-07-15 19:43:28.139534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.403 [2024-07-15 19:43:28.139579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:37.403 [2024-07-15 19:43:28.139609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.984 ms 00:21:37.403 [2024-07-15 19:43:28.139623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.403 [2024-07-15 19:43:28.139717] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:37.403 [2024-07-15 19:43:28.139739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.139764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.139778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.139808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.139823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.139850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.139869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.139881] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.139896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.139909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.139923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.139935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.139950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.139962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.139976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.139988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140233] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:37.403 [2024-07-15 19:43:28.140547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 
19:43:28.140612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:21:37.404 [2024-07-15 19:43:28.140958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.140992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.141008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.141018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.141031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.141042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.141055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.141066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.141080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.141090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.141103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.141115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.141128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.141138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.141153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.141164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:37.404 [2024-07-15 19:43:28.141185] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:37.404 [2024-07-15 19:43:28.141196] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03f52e51-6fc1-4d5a-8b5f-2a9f46a0322e 00:21:37.404 [2024-07-15 19:43:28.141213] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:37.404 [2024-07-15 19:43:28.141226] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:37.404 [2024-07-15 19:43:28.141239] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:37.404 [2024-07-15 19:43:28.141249] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:37.404 [2024-07-15 19:43:28.141262] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:37.404 [2024-07-15 19:43:28.141272] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:37.404 [2024-07-15 19:43:28.141284] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:37.404 [2024-07-15 19:43:28.141293] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:37.404 [2024-07-15 19:43:28.141304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:37.404 [2024-07-15 19:43:28.141314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.404 [2024-07-15 19:43:28.141327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:37.404 [2024-07-15 19:43:28.141338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.600 ms 00:21:37.404 [2024-07-15 19:43:28.141350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.404 [2024-07-15 19:43:28.163373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.404 [2024-07-15 19:43:28.163425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:37.404 [2024-07-15 19:43:28.163439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.986 ms 00:21:37.404 [2024-07-15 19:43:28.163456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.404 [2024-07-15 19:43:28.164073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.404 [2024-07-15 19:43:28.164095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:37.404 [2024-07-15 19:43:28.164107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:21:37.404 [2024-07-15 19:43:28.164119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.662 [2024-07-15 19:43:28.238202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.662 [2024-07-15 19:43:28.238259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:37.662 [2024-07-15 19:43:28.238273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.662 [2024-07-15 19:43:28.238286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.662 [2024-07-15 19:43:28.238425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.662 [2024-07-15 19:43:28.238441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:37.662 [2024-07-15 19:43:28.238452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.662 [2024-07-15 19:43:28.238465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.662 [2024-07-15 19:43:28.238541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.662 [2024-07-15 19:43:28.238559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:37.663 [2024-07-15 19:43:28.238570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.663 [2024-07-15 19:43:28.238585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.663 [2024-07-15 19:43:28.238617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.663 [2024-07-15 19:43:28.238630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:37.663 [2024-07-15 19:43:28.238640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.663 [2024-07-15 19:43:28.238652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.663 [2024-07-15 19:43:28.371588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:21:37.663 [2024-07-15 19:43:28.371657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:37.663 [2024-07-15 19:43:28.371674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.663 [2024-07-15 19:43:28.371687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.922 [2024-07-15 19:43:28.478731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.922 [2024-07-15 19:43:28.478818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:37.922 [2024-07-15 19:43:28.478836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.922 [2024-07-15 19:43:28.478867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.922 [2024-07-15 19:43:28.478981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.922 [2024-07-15 19:43:28.479002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:37.922 [2024-07-15 19:43:28.479013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.922 [2024-07-15 19:43:28.479030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.922 [2024-07-15 19:43:28.479085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.922 [2024-07-15 19:43:28.479100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:37.922 [2024-07-15 19:43:28.479111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.922 [2024-07-15 19:43:28.479125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.922 [2024-07-15 19:43:28.479262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.922 [2024-07-15 19:43:28.479281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:37.922 [2024-07-15 19:43:28.479312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.922 [2024-07-15 19:43:28.479326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.922 [2024-07-15 19:43:28.479384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.922 [2024-07-15 19:43:28.479401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:37.922 [2024-07-15 19:43:28.479412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.922 [2024-07-15 19:43:28.479426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.922 [2024-07-15 19:43:28.479478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.922 [2024-07-15 19:43:28.479493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:37.922 [2024-07-15 19:43:28.479507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.922 [2024-07-15 19:43:28.479523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.922 [2024-07-15 19:43:28.479584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.922 [2024-07-15 19:43:28.479599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:37.922 [2024-07-15 19:43:28.479610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.922 [2024-07-15 19:43:28.479624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.922 [2024-07-15 
19:43:28.479842] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 571.747 ms, result 0 00:21:37.922 true 00:21:37.922 19:43:28 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 81261 00:21:37.922 19:43:28 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81261 ']' 00:21:37.922 19:43:28 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81261 00:21:37.922 19:43:28 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:21:37.922 19:43:28 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:37.922 19:43:28 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81261 00:21:37.922 killing process with pid 81261 00:21:37.922 19:43:28 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:37.922 19:43:28 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:37.922 19:43:28 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81261' 00:21:37.922 19:43:28 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 81261 00:21:37.922 19:43:28 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 81261 00:21:43.188 19:43:33 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:21:43.765 65536+0 records in 00:21:43.765 65536+0 records out 00:21:43.765 268435456 bytes (268 MB, 256 MiB) copied, 1.04291 s, 257 MB/s 00:21:43.765 19:43:34 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:44.062 [2024-07-15 19:43:34.611192] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
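For context on the step above: the ftl_trim test first builds a 256 MiB random pattern with dd (65536 blocks of 4 KiB) and then uses spdk_dd to copy that pattern onto the ftl0 bdev described by the JSON config. Below is a minimal sketch of those two commands using the paths that appear in this log; note the log does not show where dd's output is redirected, so writing it to the random_pattern file that spdk_dd later reads is an assumption here.

    # Generate 256 MiB of random data (65536 x 4 KiB blocks) as the test pattern
    # (destination file assumed; the trace above omits the redirection).
    dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536
    # Copy the pattern onto the ftl0 bdev defined by ftl.json, as done by trim.sh@69.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
        --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json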
00:21:44.062 [2024-07-15 19:43:34.611323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81469 ] 00:21:44.062 [2024-07-15 19:43:34.780467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.321 [2024-07-15 19:43:35.051085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.887 [2024-07-15 19:43:35.447958] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:44.887 [2024-07-15 19:43:35.448032] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:44.887 [2024-07-15 19:43:35.611322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.887 [2024-07-15 19:43:35.611376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:44.887 [2024-07-15 19:43:35.611392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:44.887 [2024-07-15 19:43:35.611403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.887 [2024-07-15 19:43:35.614685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.887 [2024-07-15 19:43:35.614726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:44.887 [2024-07-15 19:43:35.614738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.261 ms 00:21:44.887 [2024-07-15 19:43:35.614748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.887 [2024-07-15 19:43:35.614860] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:44.887 [2024-07-15 19:43:35.616029] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:44.887 [2024-07-15 19:43:35.616056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.887 [2024-07-15 19:43:35.616067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:44.887 [2024-07-15 19:43:35.616078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.204 ms 00:21:44.887 [2024-07-15 19:43:35.616088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.887 [2024-07-15 19:43:35.617652] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:44.887 [2024-07-15 19:43:35.639171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.887 [2024-07-15 19:43:35.639222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:44.887 [2024-07-15 19:43:35.639242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.520 ms 00:21:44.887 [2024-07-15 19:43:35.639253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.887 [2024-07-15 19:43:35.639352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.887 [2024-07-15 19:43:35.639367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:44.887 [2024-07-15 19:43:35.639378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:44.887 [2024-07-15 19:43:35.639388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.887 [2024-07-15 19:43:35.646231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:44.887 [2024-07-15 19:43:35.646262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:44.887 [2024-07-15 19:43:35.646274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.801 ms 00:21:44.887 [2024-07-15 19:43:35.646300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.887 [2024-07-15 19:43:35.646405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.887 [2024-07-15 19:43:35.646420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:44.887 [2024-07-15 19:43:35.646432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:44.887 [2024-07-15 19:43:35.646442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.887 [2024-07-15 19:43:35.646475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.887 [2024-07-15 19:43:35.646486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:44.887 [2024-07-15 19:43:35.646496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:44.887 [2024-07-15 19:43:35.646509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.887 [2024-07-15 19:43:35.646533] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:44.887 [2024-07-15 19:43:35.652646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.887 [2024-07-15 19:43:35.652794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:44.887 [2024-07-15 19:43:35.652908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.119 ms 00:21:44.887 [2024-07-15 19:43:35.652945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.888 [2024-07-15 19:43:35.653043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.888 [2024-07-15 19:43:35.653080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:44.888 [2024-07-15 19:43:35.653111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:44.888 [2024-07-15 19:43:35.653203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.888 [2024-07-15 19:43:35.653259] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:44.888 [2024-07-15 19:43:35.653306] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:44.888 [2024-07-15 19:43:35.653383] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:44.888 [2024-07-15 19:43:35.653498] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:44.888 [2024-07-15 19:43:35.653627] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:44.888 [2024-07-15 19:43:35.653677] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:44.888 [2024-07-15 19:43:35.653792] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:44.888 [2024-07-15 19:43:35.653851] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:44.888 [2024-07-15 19:43:35.653901] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:44.888 [2024-07-15 19:43:35.653948] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:44.888 [2024-07-15 19:43:35.654025] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:44.888 [2024-07-15 19:43:35.654059] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:44.888 [2024-07-15 19:43:35.654089] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:44.888 [2024-07-15 19:43:35.654120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.888 [2024-07-15 19:43:35.654149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:44.888 [2024-07-15 19:43:35.654179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.862 ms 00:21:44.888 [2024-07-15 19:43:35.654248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.888 [2024-07-15 19:43:35.654355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.888 [2024-07-15 19:43:35.654459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:44.888 [2024-07-15 19:43:35.654552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:44.888 [2024-07-15 19:43:35.654589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.888 [2024-07-15 19:43:35.654695] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:44.888 [2024-07-15 19:43:35.654729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:44.888 [2024-07-15 19:43:35.654759] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:44.888 [2024-07-15 19:43:35.654863] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:44.888 [2024-07-15 19:43:35.654937] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:44.888 [2024-07-15 19:43:35.654966] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:44.888 [2024-07-15 19:43:35.654994] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:44.888 [2024-07-15 19:43:35.655024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:44.888 [2024-07-15 19:43:35.655052] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:44.888 [2024-07-15 19:43:35.655080] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:44.888 [2024-07-15 19:43:35.655108] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:44.888 [2024-07-15 19:43:35.655136] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:44.888 [2024-07-15 19:43:35.655163] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:44.888 [2024-07-15 19:43:35.655174] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:44.888 [2024-07-15 19:43:35.655184] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:44.888 [2024-07-15 19:43:35.655193] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:44.888 [2024-07-15 19:43:35.655202] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:44.888 [2024-07-15 19:43:35.655211] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:44.888 [2024-07-15 19:43:35.655231] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:44.888 [2024-07-15 19:43:35.655240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:44.888 [2024-07-15 19:43:35.655249] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:44.888 [2024-07-15 19:43:35.655259] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:44.888 [2024-07-15 19:43:35.655268] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:44.888 [2024-07-15 19:43:35.655277] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:44.888 [2024-07-15 19:43:35.655286] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:44.888 [2024-07-15 19:43:35.655295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:44.888 [2024-07-15 19:43:35.655305] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:44.888 [2024-07-15 19:43:35.655314] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:44.888 [2024-07-15 19:43:35.655327] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:44.888 [2024-07-15 19:43:35.655336] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:44.888 [2024-07-15 19:43:35.655345] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:44.888 [2024-07-15 19:43:35.655354] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:44.888 [2024-07-15 19:43:35.655364] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:44.888 [2024-07-15 19:43:35.655373] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:44.888 [2024-07-15 19:43:35.655382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:44.888 [2024-07-15 19:43:35.655391] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:44.888 [2024-07-15 19:43:35.655400] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:44.888 [2024-07-15 19:43:35.655409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:44.888 [2024-07-15 19:43:35.655418] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:44.888 [2024-07-15 19:43:35.655427] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:44.888 [2024-07-15 19:43:35.655436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:44.888 [2024-07-15 19:43:35.655445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:44.888 [2024-07-15 19:43:35.655454] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:44.888 [2024-07-15 19:43:35.655463] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:44.888 [2024-07-15 19:43:35.655473] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:44.888 [2024-07-15 19:43:35.655483] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:44.888 [2024-07-15 19:43:35.655492] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:44.888 [2024-07-15 19:43:35.655502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:44.888 [2024-07-15 19:43:35.655512] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:44.888 [2024-07-15 19:43:35.655522] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:44.888 
[2024-07-15 19:43:35.655531] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:44.888 [2024-07-15 19:43:35.655540] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:44.888 [2024-07-15 19:43:35.655549] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:44.888 [2024-07-15 19:43:35.655560] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:44.888 [2024-07-15 19:43:35.655577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:44.888 [2024-07-15 19:43:35.655589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:44.888 [2024-07-15 19:43:35.655599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:44.888 [2024-07-15 19:43:35.655609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:44.888 [2024-07-15 19:43:35.655620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:44.888 [2024-07-15 19:43:35.655630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:44.888 [2024-07-15 19:43:35.655642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:44.888 [2024-07-15 19:43:35.655652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:44.888 [2024-07-15 19:43:35.655663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:44.888 [2024-07-15 19:43:35.655673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:44.888 [2024-07-15 19:43:35.655683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:44.888 [2024-07-15 19:43:35.655694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:44.888 [2024-07-15 19:43:35.655704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:44.888 [2024-07-15 19:43:35.655714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:44.888 [2024-07-15 19:43:35.655725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:44.888 [2024-07-15 19:43:35.655735] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:44.888 [2024-07-15 19:43:35.655746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:44.888 [2024-07-15 19:43:35.655757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:44.888 [2024-07-15 19:43:35.655767] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:44.888 [2024-07-15 19:43:35.655789] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:44.888 [2024-07-15 19:43:35.655800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:44.888 [2024-07-15 19:43:35.655812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.888 [2024-07-15 19:43:35.655822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:44.888 [2024-07-15 19:43:35.655832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.168 ms 00:21:44.889 [2024-07-15 19:43:35.655842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.148 [2024-07-15 19:43:35.710701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.148 [2024-07-15 19:43:35.710923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:45.148 [2024-07-15 19:43:35.711006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.799 ms 00:21:45.148 [2024-07-15 19:43:35.711043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.148 [2024-07-15 19:43:35.711233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.148 [2024-07-15 19:43:35.711328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:45.148 [2024-07-15 19:43:35.711367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:21:45.148 [2024-07-15 19:43:35.711402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.148 [2024-07-15 19:43:35.762610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.148 [2024-07-15 19:43:35.762914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:45.148 [2024-07-15 19:43:35.762999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.115 ms 00:21:45.148 [2024-07-15 19:43:35.763036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.148 [2024-07-15 19:43:35.763187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.148 [2024-07-15 19:43:35.763224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:45.148 [2024-07-15 19:43:35.763314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:45.148 [2024-07-15 19:43:35.763349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.148 [2024-07-15 19:43:35.763864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.148 [2024-07-15 19:43:35.764010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:45.148 [2024-07-15 19:43:35.764083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:21:45.148 [2024-07-15 19:43:35.764118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.148 [2024-07-15 19:43:35.764279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.148 [2024-07-15 19:43:35.764319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:45.148 [2024-07-15 19:43:35.764391] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:21:45.148 [2024-07-15 19:43:35.764425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.148 [2024-07-15 19:43:35.787438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.148 [2024-07-15 19:43:35.787743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:45.148 [2024-07-15 19:43:35.787842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.959 ms 00:21:45.148 [2024-07-15 19:43:35.787882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.148 [2024-07-15 19:43:35.809999] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:45.148 [2024-07-15 19:43:35.810200] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:45.148 [2024-07-15 19:43:35.810241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.148 [2024-07-15 19:43:35.810255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:45.148 [2024-07-15 19:43:35.810270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.157 ms 00:21:45.148 [2024-07-15 19:43:35.810281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.148 [2024-07-15 19:43:35.843623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.148 [2024-07-15 19:43:35.843705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:45.148 [2024-07-15 19:43:35.843723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.184 ms 00:21:45.148 [2024-07-15 19:43:35.843734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.148 [2024-07-15 19:43:35.866040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.148 [2024-07-15 19:43:35.866100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:45.148 [2024-07-15 19:43:35.866117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.151 ms 00:21:45.148 [2024-07-15 19:43:35.866127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.148 [2024-07-15 19:43:35.887530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.148 [2024-07-15 19:43:35.887585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:45.148 [2024-07-15 19:43:35.887604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.255 ms 00:21:45.148 [2024-07-15 19:43:35.887618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.148 [2024-07-15 19:43:35.888543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.148 [2024-07-15 19:43:35.888590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:45.148 [2024-07-15 19:43:35.888611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms 00:21:45.148 [2024-07-15 19:43:35.888626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.405 [2024-07-15 19:43:35.981122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.405 [2024-07-15 19:43:35.981184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:45.405 [2024-07-15 19:43:35.981207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 92.454 ms 00:21:45.405 [2024-07-15 19:43:35.981218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.405 [2024-07-15 19:43:35.994711] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:45.405 [2024-07-15 19:43:36.014831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.405 [2024-07-15 19:43:36.014913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:45.405 [2024-07-15 19:43:36.014940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.472 ms 00:21:45.405 [2024-07-15 19:43:36.014958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.405 [2024-07-15 19:43:36.015115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.405 [2024-07-15 19:43:36.015131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:45.405 [2024-07-15 19:43:36.015150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:45.405 [2024-07-15 19:43:36.015171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.405 [2024-07-15 19:43:36.015255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.405 [2024-07-15 19:43:36.015275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:45.405 [2024-07-15 19:43:36.015291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:45.405 [2024-07-15 19:43:36.015306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.405 [2024-07-15 19:43:36.015344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.405 [2024-07-15 19:43:36.015361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:45.405 [2024-07-15 19:43:36.015378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:45.405 [2024-07-15 19:43:36.015409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.405 [2024-07-15 19:43:36.015473] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:45.405 [2024-07-15 19:43:36.015491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.405 [2024-07-15 19:43:36.015508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:45.405 [2024-07-15 19:43:36.015523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:45.405 [2024-07-15 19:43:36.015538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.405 [2024-07-15 19:43:36.054685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.405 [2024-07-15 19:43:36.054895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:45.405 [2024-07-15 19:43:36.054979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.117 ms 00:21:45.405 [2024-07-15 19:43:36.055024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.405 [2024-07-15 19:43:36.055198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.405 [2024-07-15 19:43:36.055244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:45.405 [2024-07-15 19:43:36.055331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:45.405 [2024-07-15 19:43:36.055366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
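As a quick cross-check of the FTL layout dump printed earlier in this startup (a worked calculation added for reference, not part of the original log output): with the reported L2P address size of 4 bytes and 23592960 L2P entries, the mapping table occupies

    23592960 entries x 4 bytes = 94371840 bytes = 90.00 MiB

which matches the 90.00 MiB "Region l2p" size shown in the NV cache layout above.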
00:21:45.405 [2024-07-15 19:43:36.056364] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:45.405 [2024-07-15 19:43:36.061769] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 444.747 ms, result 0 00:21:45.405 [2024-07-15 19:43:36.062632] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:45.405 [2024-07-15 19:43:36.082621] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:54.559  Copying: 27/256 [MB] (27 MBps) Copying: 55/256 [MB] (27 MBps) Copying: 83/256 [MB] (28 MBps) Copying: 112/256 [MB] (29 MBps) Copying: 141/256 [MB] (28 MBps) Copying: 167/256 [MB] (26 MBps) Copying: 196/256 [MB] (28 MBps) Copying: 226/256 [MB] (30 MBps) Copying: 255/256 [MB] (28 MBps) Copying: 256/256 [MB] (average 28 MBps)[2024-07-15 19:43:45.106669] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:54.559 [2024-07-15 19:43:45.122366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.559 [2024-07-15 19:43:45.122418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:54.559 [2024-07-15 19:43:45.122433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:54.559 [2024-07-15 19:43:45.122444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.559 [2024-07-15 19:43:45.122467] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:54.559 [2024-07-15 19:43:45.126519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.559 [2024-07-15 19:43:45.126654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:54.559 [2024-07-15 19:43:45.126751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.033 ms 00:21:54.559 [2024-07-15 19:43:45.126813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.559 [2024-07-15 19:43:45.128630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.559 [2024-07-15 19:43:45.128772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:54.559 [2024-07-15 19:43:45.128805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.759 ms 00:21:54.559 [2024-07-15 19:43:45.128817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.559 [2024-07-15 19:43:45.135800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.559 [2024-07-15 19:43:45.135838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:54.559 [2024-07-15 19:43:45.135851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.957 ms 00:21:54.559 [2024-07-15 19:43:45.135860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.559 [2024-07-15 19:43:45.141620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.559 [2024-07-15 19:43:45.141654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:54.559 [2024-07-15 19:43:45.141666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.700 ms 00:21:54.559 [2024-07-15 19:43:45.141676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.559 [2024-07-15 19:43:45.180185] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.559 [2024-07-15 19:43:45.180221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:54.559 [2024-07-15 19:43:45.180236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.462 ms 00:21:54.559 [2024-07-15 19:43:45.180245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.559 [2024-07-15 19:43:45.202743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.559 [2024-07-15 19:43:45.202795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:54.559 [2024-07-15 19:43:45.202810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.443 ms 00:21:54.559 [2024-07-15 19:43:45.202821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.559 [2024-07-15 19:43:45.202973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.559 [2024-07-15 19:43:45.202991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:54.559 [2024-07-15 19:43:45.203003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:21:54.559 [2024-07-15 19:43:45.203014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.559 [2024-07-15 19:43:45.241879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.559 [2024-07-15 19:43:45.241916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:54.559 [2024-07-15 19:43:45.241930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.846 ms 00:21:54.560 [2024-07-15 19:43:45.241940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.560 [2024-07-15 19:43:45.281229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.560 [2024-07-15 19:43:45.281282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:54.560 [2024-07-15 19:43:45.281297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.231 ms 00:21:54.560 [2024-07-15 19:43:45.281308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.560 [2024-07-15 19:43:45.320551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.560 [2024-07-15 19:43:45.320609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:54.560 [2024-07-15 19:43:45.320625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.173 ms 00:21:54.560 [2024-07-15 19:43:45.320635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.818 [2024-07-15 19:43:45.357560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.818 [2024-07-15 19:43:45.357610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:54.818 [2024-07-15 19:43:45.357626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.809 ms 00:21:54.818 [2024-07-15 19:43:45.357635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.818 [2024-07-15 19:43:45.357701] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:54.818 [2024-07-15 19:43:45.357721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.357735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 
19:43:45.357746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.357757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.357769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.357793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.357805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.357816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.357828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.357839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.357850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.357861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.357871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.357882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.357893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.357904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.357915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.357925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.357936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.357946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.357957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.357967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.357978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.357988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.357998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.358008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.358021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:21:54.818 [2024-07-15 19:43:45.358032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.358043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.358054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.358067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.358078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.358089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.358100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.358111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.358122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.358133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.358143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.358155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.358165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.358176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.358186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:54.818 [2024-07-15 19:43:45.358197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:54.819 [2024-07-15 19:43:45.358837] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:54.819 [2024-07-15 19:43:45.358854] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
03f52e51-6fc1-4d5a-8b5f-2a9f46a0322e 00:21:54.819 [2024-07-15 19:43:45.358882] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:54.819 [2024-07-15 19:43:45.358893] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:54.819 [2024-07-15 19:43:45.358903] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:54.819 [2024-07-15 19:43:45.358927] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:54.819 [2024-07-15 19:43:45.358938] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:54.819 [2024-07-15 19:43:45.358949] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:54.819 [2024-07-15 19:43:45.358960] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:54.819 [2024-07-15 19:43:45.358970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:54.819 [2024-07-15 19:43:45.358979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:54.819 [2024-07-15 19:43:45.358990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.819 [2024-07-15 19:43:45.359001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:54.819 [2024-07-15 19:43:45.359013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.290 ms 00:21:54.819 [2024-07-15 19:43:45.359023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.819 [2024-07-15 19:43:45.381032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.819 [2024-07-15 19:43:45.381093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:54.819 [2024-07-15 19:43:45.381109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.979 ms 00:21:54.819 [2024-07-15 19:43:45.381119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.819 [2024-07-15 19:43:45.381656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.819 [2024-07-15 19:43:45.381671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:54.819 [2024-07-15 19:43:45.381682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:21:54.819 [2024-07-15 19:43:45.381701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.819 [2024-07-15 19:43:45.433691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:54.819 [2024-07-15 19:43:45.433748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:54.819 [2024-07-15 19:43:45.433763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:54.819 [2024-07-15 19:43:45.433773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.819 [2024-07-15 19:43:45.433903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:54.819 [2024-07-15 19:43:45.433915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:54.819 [2024-07-15 19:43:45.433926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:54.819 [2024-07-15 19:43:45.433947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.819 [2024-07-15 19:43:45.434005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:54.820 [2024-07-15 19:43:45.434018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:54.820 
[2024-07-15 19:43:45.434028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:54.820 [2024-07-15 19:43:45.434038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.820 [2024-07-15 19:43:45.434057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:54.820 [2024-07-15 19:43:45.434067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:54.820 [2024-07-15 19:43:45.434077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:54.820 [2024-07-15 19:43:45.434088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.820 [2024-07-15 19:43:45.559114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:54.820 [2024-07-15 19:43:45.559170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:54.820 [2024-07-15 19:43:45.559184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:54.820 [2024-07-15 19:43:45.559195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.077 [2024-07-15 19:43:45.668633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.077 [2024-07-15 19:43:45.668685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:55.077 [2024-07-15 19:43:45.668700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.077 [2024-07-15 19:43:45.668711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.077 [2024-07-15 19:43:45.668807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.077 [2024-07-15 19:43:45.668819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:55.077 [2024-07-15 19:43:45.668830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.077 [2024-07-15 19:43:45.668840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.077 [2024-07-15 19:43:45.668870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.077 [2024-07-15 19:43:45.668881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:55.078 [2024-07-15 19:43:45.668891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.078 [2024-07-15 19:43:45.668900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.078 [2024-07-15 19:43:45.669021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.078 [2024-07-15 19:43:45.669034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:55.078 [2024-07-15 19:43:45.669045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.078 [2024-07-15 19:43:45.669055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.078 [2024-07-15 19:43:45.669094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.078 [2024-07-15 19:43:45.669107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:55.078 [2024-07-15 19:43:45.669117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.078 [2024-07-15 19:43:45.669127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.078 [2024-07-15 19:43:45.669167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.078 [2024-07-15 19:43:45.669183] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:55.078 [2024-07-15 19:43:45.669193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.078 [2024-07-15 19:43:45.669203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.078 [2024-07-15 19:43:45.669247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.078 [2024-07-15 19:43:45.669263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:55.078 [2024-07-15 19:43:45.669273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.078 [2024-07-15 19:43:45.669283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.078 [2024-07-15 19:43:45.669422] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 547.045 ms, result 0 00:21:56.454 00:21:56.454 00:21:56.454 19:43:47 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=81598 00:21:56.454 19:43:47 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:21:56.454 19:43:47 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 81598 00:21:56.454 19:43:47 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 81598 ']' 00:21:56.454 19:43:47 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.454 19:43:47 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:56.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.454 19:43:47 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.454 19:43:47 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:56.454 19:43:47 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:56.454 [2024-07-15 19:43:47.205400] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
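The records around this point show the ftl_trim test bringing up a fresh SPDK target and driving it over JSON-RPC: spdk_tgt is started with -L ftl_init and backgrounded, its PID is captured as svcpid, the test waits for the RPC socket, replays the saved configuration with load_config, later issues two bdev_ftl_unmap trims, and finally stops the target. As a rough orientation only, the shell sketch below restates that sequence using the commands visible in the xtrace; it is not the actual ftl/trim.sh (the rpc.py path is shortened, and the source of the JSON fed to load_config is not shown in the trace).

    spdk_tgt -L ftl_init &           # build/bin/spdk_tgt with FTL init logging, backgrounded as in the trace
    svcpid=$!                        # PID recorded by the test (81598 in this run)
    waitforlisten "$svcpid"          # autotest_common.sh helper: wait for /var/tmp/spdk.sock to accept RPCs
    rpc.py load_config               # replay the saved bdev/FTL config (JSON source not shown in the trace)
    rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
    killprocess "$svcpid"            # autotest_common.sh helper: terminate the target and wait for it
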
00:21:56.454 [2024-07-15 19:43:47.205587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81598 ] 00:21:56.713 [2024-07-15 19:43:47.389294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.971 [2024-07-15 19:43:47.628539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.907 19:43:48 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:57.907 19:43:48 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:21:57.907 19:43:48 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:21:58.165 [2024-07-15 19:43:48.837280] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:58.165 [2024-07-15 19:43:48.837351] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:58.424 [2024-07-15 19:43:49.016807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.424 [2024-07-15 19:43:49.016869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:58.424 [2024-07-15 19:43:49.016901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:58.424 [2024-07-15 19:43:49.016916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.424 [2024-07-15 19:43:49.021055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.424 [2024-07-15 19:43:49.021100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:58.424 [2024-07-15 19:43:49.021113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.117 ms 00:21:58.424 [2024-07-15 19:43:49.021126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.424 [2024-07-15 19:43:49.021227] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:58.424 [2024-07-15 19:43:49.022317] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:58.424 [2024-07-15 19:43:49.022351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.424 [2024-07-15 19:43:49.022366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:58.424 [2024-07-15 19:43:49.022386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.132 ms 00:21:58.424 [2024-07-15 19:43:49.022400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.424 [2024-07-15 19:43:49.024063] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:58.424 [2024-07-15 19:43:49.043747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.424 [2024-07-15 19:43:49.043797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:58.424 [2024-07-15 19:43:49.043819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.675 ms 00:21:58.424 [2024-07-15 19:43:49.043830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.424 [2024-07-15 19:43:49.043939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.424 [2024-07-15 19:43:49.043954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:58.424 [2024-07-15 19:43:49.043970] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:58.424 [2024-07-15 19:43:49.043981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.424 [2024-07-15 19:43:49.050756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.424 [2024-07-15 19:43:49.050796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:58.424 [2024-07-15 19:43:49.050820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.712 ms 00:21:58.424 [2024-07-15 19:43:49.050831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.424 [2024-07-15 19:43:49.050973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.424 [2024-07-15 19:43:49.050989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:58.424 [2024-07-15 19:43:49.051006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:21:58.424 [2024-07-15 19:43:49.051017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.424 [2024-07-15 19:43:49.051063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.424 [2024-07-15 19:43:49.051074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:58.424 [2024-07-15 19:43:49.051088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:58.424 [2024-07-15 19:43:49.051098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.424 [2024-07-15 19:43:49.051127] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:58.424 [2024-07-15 19:43:49.056568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.424 [2024-07-15 19:43:49.056605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:58.424 [2024-07-15 19:43:49.056617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.449 ms 00:21:58.424 [2024-07-15 19:43:49.056630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.424 [2024-07-15 19:43:49.056697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.424 [2024-07-15 19:43:49.056714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:58.424 [2024-07-15 19:43:49.056725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:58.424 [2024-07-15 19:43:49.056760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.424 [2024-07-15 19:43:49.056802] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:58.424 [2024-07-15 19:43:49.056830] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:58.424 [2024-07-15 19:43:49.056880] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:58.424 [2024-07-15 19:43:49.056904] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:58.424 [2024-07-15 19:43:49.056990] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:58.424 [2024-07-15 19:43:49.057008] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:58.424 [2024-07-15 19:43:49.057024] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:58.424 [2024-07-15 19:43:49.057040] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:58.424 [2024-07-15 19:43:49.057052] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:58.424 [2024-07-15 19:43:49.057066] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:58.425 [2024-07-15 19:43:49.057076] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:58.425 [2024-07-15 19:43:49.057089] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:58.425 [2024-07-15 19:43:49.057098] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:58.425 [2024-07-15 19:43:49.057114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.425 [2024-07-15 19:43:49.057124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:58.425 [2024-07-15 19:43:49.057137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:21:58.425 [2024-07-15 19:43:49.057147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.425 [2024-07-15 19:43:49.057226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.425 [2024-07-15 19:43:49.057236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:58.425 [2024-07-15 19:43:49.057249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:58.425 [2024-07-15 19:43:49.057276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.425 [2024-07-15 19:43:49.057388] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:58.425 [2024-07-15 19:43:49.057401] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:58.425 [2024-07-15 19:43:49.057414] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:58.425 [2024-07-15 19:43:49.057425] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.425 [2024-07-15 19:43:49.057437] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:58.425 [2024-07-15 19:43:49.057446] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:58.425 [2024-07-15 19:43:49.057460] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:58.425 [2024-07-15 19:43:49.057469] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:58.425 [2024-07-15 19:43:49.057484] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:58.425 [2024-07-15 19:43:49.057493] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:58.425 [2024-07-15 19:43:49.057505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:58.425 [2024-07-15 19:43:49.057514] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:58.425 [2024-07-15 19:43:49.057527] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:58.425 [2024-07-15 19:43:49.057537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:58.425 [2024-07-15 19:43:49.057549] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:58.425 [2024-07-15 19:43:49.057559] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.425 
[2024-07-15 19:43:49.057571] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:58.425 [2024-07-15 19:43:49.057581] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:58.425 [2024-07-15 19:43:49.057592] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.425 [2024-07-15 19:43:49.057602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:58.425 [2024-07-15 19:43:49.057613] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:58.425 [2024-07-15 19:43:49.057622] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.425 [2024-07-15 19:43:49.057634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:58.425 [2024-07-15 19:43:49.057643] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:58.425 [2024-07-15 19:43:49.057658] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.425 [2024-07-15 19:43:49.057667] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:58.425 [2024-07-15 19:43:49.057679] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:58.425 [2024-07-15 19:43:49.057697] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.425 [2024-07-15 19:43:49.057709] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:58.425 [2024-07-15 19:43:49.057718] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:58.425 [2024-07-15 19:43:49.057731] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.425 [2024-07-15 19:43:49.057741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:58.425 [2024-07-15 19:43:49.057752] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:58.425 [2024-07-15 19:43:49.057761] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:58.425 [2024-07-15 19:43:49.057773] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:58.425 [2024-07-15 19:43:49.057783] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:58.425 [2024-07-15 19:43:49.057794] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:58.425 [2024-07-15 19:43:49.057814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:58.425 [2024-07-15 19:43:49.057827] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:58.425 [2024-07-15 19:43:49.057837] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.425 [2024-07-15 19:43:49.057851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:58.425 [2024-07-15 19:43:49.057860] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:58.425 [2024-07-15 19:43:49.057872] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.425 [2024-07-15 19:43:49.057881] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:58.425 [2024-07-15 19:43:49.057897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:58.425 [2024-07-15 19:43:49.057907] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:58.425 [2024-07-15 19:43:49.057919] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.425 [2024-07-15 19:43:49.057929] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:21:58.425 [2024-07-15 19:43:49.057941] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:58.425 [2024-07-15 19:43:49.057951] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:58.425 [2024-07-15 19:43:49.057963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:58.425 [2024-07-15 19:43:49.057972] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:58.425 [2024-07-15 19:43:49.057984] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:58.425 [2024-07-15 19:43:49.057994] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:58.425 [2024-07-15 19:43:49.058009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:58.425 [2024-07-15 19:43:49.058021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:58.425 [2024-07-15 19:43:49.058037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:58.425 [2024-07-15 19:43:49.058048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:58.425 [2024-07-15 19:43:49.058060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:58.425 [2024-07-15 19:43:49.058071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:58.425 [2024-07-15 19:43:49.058083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:58.425 [2024-07-15 19:43:49.058094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:58.425 [2024-07-15 19:43:49.058106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:58.425 [2024-07-15 19:43:49.058116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:58.425 [2024-07-15 19:43:49.058129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:58.425 [2024-07-15 19:43:49.058139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:58.425 [2024-07-15 19:43:49.058152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:58.425 [2024-07-15 19:43:49.058162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:58.425 [2024-07-15 19:43:49.058175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:58.425 [2024-07-15 19:43:49.058185] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:58.425 [2024-07-15 
19:43:49.058198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:58.425 [2024-07-15 19:43:49.058210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:58.425 [2024-07-15 19:43:49.058225] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:58.425 [2024-07-15 19:43:49.058236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:58.425 [2024-07-15 19:43:49.058248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:58.425 [2024-07-15 19:43:49.058259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.425 [2024-07-15 19:43:49.058272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:58.425 [2024-07-15 19:43:49.058283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.927 ms 00:21:58.425 [2024-07-15 19:43:49.058295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.425 [2024-07-15 19:43:49.102643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.425 [2024-07-15 19:43:49.102695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:58.425 [2024-07-15 19:43:49.102710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.283 ms 00:21:58.425 [2024-07-15 19:43:49.102733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.426 [2024-07-15 19:43:49.102881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.426 [2024-07-15 19:43:49.102901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:58.426 [2024-07-15 19:43:49.102913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:58.426 [2024-07-15 19:43:49.102928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.426 [2024-07-15 19:43:49.155383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.426 [2024-07-15 19:43:49.155447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:58.426 [2024-07-15 19:43:49.155462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.430 ms 00:21:58.426 [2024-07-15 19:43:49.155475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.426 [2024-07-15 19:43:49.155552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.426 [2024-07-15 19:43:49.155566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:58.426 [2024-07-15 19:43:49.155577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:58.426 [2024-07-15 19:43:49.155590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.426 [2024-07-15 19:43:49.156037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.426 [2024-07-15 19:43:49.156054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:58.426 [2024-07-15 19:43:49.156070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:21:58.426 [2024-07-15 19:43:49.156083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:58.426 [2024-07-15 19:43:49.156198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.426 [2024-07-15 19:43:49.156214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:58.426 [2024-07-15 19:43:49.156224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:21:58.426 [2024-07-15 19:43:49.156237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.426 [2024-07-15 19:43:49.180572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.426 [2024-07-15 19:43:49.180617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:58.426 [2024-07-15 19:43:49.180631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.314 ms 00:21:58.426 [2024-07-15 19:43:49.180644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.426 [2024-07-15 19:43:49.201604] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:58.426 [2024-07-15 19:43:49.201649] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:58.426 [2024-07-15 19:43:49.201665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.426 [2024-07-15 19:43:49.201681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:58.426 [2024-07-15 19:43:49.201693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.908 ms 00:21:58.426 [2024-07-15 19:43:49.201707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.685 [2024-07-15 19:43:49.232387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.685 [2024-07-15 19:43:49.232437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:58.685 [2024-07-15 19:43:49.232451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.580 ms 00:21:58.685 [2024-07-15 19:43:49.232466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.685 [2024-07-15 19:43:49.252130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.685 [2024-07-15 19:43:49.252181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:58.685 [2024-07-15 19:43:49.252208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.587 ms 00:21:58.685 [2024-07-15 19:43:49.252230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.685 [2024-07-15 19:43:49.271242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.685 [2024-07-15 19:43:49.271291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:58.685 [2024-07-15 19:43:49.271305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.931 ms 00:21:58.685 [2024-07-15 19:43:49.271321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.685 [2024-07-15 19:43:49.272275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.685 [2024-07-15 19:43:49.272307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:58.685 [2024-07-15 19:43:49.272321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.848 ms 00:21:58.685 [2024-07-15 19:43:49.272420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.685 [2024-07-15 
19:43:49.387958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.685 [2024-07-15 19:43:49.388023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:58.685 [2024-07-15 19:43:49.388040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 115.506 ms 00:21:58.685 [2024-07-15 19:43:49.388054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.685 [2024-07-15 19:43:49.400317] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:58.685 [2024-07-15 19:43:49.417068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.685 [2024-07-15 19:43:49.417127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:58.685 [2024-07-15 19:43:49.417149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.893 ms 00:21:58.685 [2024-07-15 19:43:49.417165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.685 [2024-07-15 19:43:49.417279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.685 [2024-07-15 19:43:49.417294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:58.685 [2024-07-15 19:43:49.417308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:58.685 [2024-07-15 19:43:49.417318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.685 [2024-07-15 19:43:49.417380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.685 [2024-07-15 19:43:49.417391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:58.685 [2024-07-15 19:43:49.417404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:58.685 [2024-07-15 19:43:49.417415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.685 [2024-07-15 19:43:49.417445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.685 [2024-07-15 19:43:49.417456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:58.685 [2024-07-15 19:43:49.417473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:58.685 [2024-07-15 19:43:49.417483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.685 [2024-07-15 19:43:49.417520] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:58.685 [2024-07-15 19:43:49.417532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.685 [2024-07-15 19:43:49.417547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:58.685 [2024-07-15 19:43:49.417558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:58.685 [2024-07-15 19:43:49.417570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.685 [2024-07-15 19:43:49.460822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.685 [2024-07-15 19:43:49.461104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:58.685 [2024-07-15 19:43:49.461201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.222 ms 00:21:58.685 [2024-07-15 19:43:49.461242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.685 [2024-07-15 19:43:49.461476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.685 [2024-07-15 19:43:49.461591] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:58.685 [2024-07-15 19:43:49.461630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:58.685 [2024-07-15 19:43:49.461663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.685 [2024-07-15 19:43:49.462860] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:58.685 [2024-07-15 19:43:49.468857] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 445.692 ms, result 0 00:21:58.685 [2024-07-15 19:43:49.470185] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:58.944 Some configs were skipped because the RPC state that can call them passed over. 00:21:58.944 19:43:49 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:21:59.203 [2024-07-15 19:43:49.751347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.203 [2024-07-15 19:43:49.751403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:59.203 [2024-07-15 19:43:49.751428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.529 ms 00:21:59.203 [2024-07-15 19:43:49.751440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.203 [2024-07-15 19:43:49.751492] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.692 ms, result 0 00:21:59.203 true 00:21:59.203 19:43:49 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:21:59.203 [2024-07-15 19:43:49.991336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.203 [2024-07-15 19:43:49.991406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:59.203 [2024-07-15 19:43:49.991424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.271 ms 00:21:59.203 [2024-07-15 19:43:49.991438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.203 [2024-07-15 19:43:49.991483] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.419 ms, result 0 00:21:59.462 true 00:21:59.462 19:43:50 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 81598 00:21:59.462 19:43:50 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81598 ']' 00:21:59.462 19:43:50 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81598 00:21:59.462 19:43:50 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:21:59.462 19:43:50 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:59.462 19:43:50 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81598 00:21:59.462 killing process with pid 81598 00:21:59.462 19:43:50 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:59.462 19:43:50 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:59.462 19:43:50 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81598' 00:21:59.462 19:43:50 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 81598 00:21:59.462 19:43:50 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 81598 00:22:00.842 [2024-07-15 19:43:51.226261] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.842 [2024-07-15 19:43:51.226326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:00.842 [2024-07-15 19:43:51.226359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:00.842 [2024-07-15 19:43:51.226370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.842 [2024-07-15 19:43:51.226406] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:00.842 [2024-07-15 19:43:51.230416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.842 [2024-07-15 19:43:51.230458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:00.842 [2024-07-15 19:43:51.230472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.976 ms 00:22:00.842 [2024-07-15 19:43:51.230487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.842 [2024-07-15 19:43:51.230738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.842 [2024-07-15 19:43:51.230754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:00.842 [2024-07-15 19:43:51.230765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:22:00.842 [2024-07-15 19:43:51.230794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.842 [2024-07-15 19:43:51.234239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.842 [2024-07-15 19:43:51.234282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:00.842 [2024-07-15 19:43:51.234297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.425 ms 00:22:00.842 [2024-07-15 19:43:51.234309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.842 [2024-07-15 19:43:51.240198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.842 [2024-07-15 19:43:51.240234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:00.842 [2024-07-15 19:43:51.240246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.852 ms 00:22:00.842 [2024-07-15 19:43:51.240261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.842 [2024-07-15 19:43:51.256742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.842 [2024-07-15 19:43:51.256786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:00.842 [2024-07-15 19:43:51.256800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.429 ms 00:22:00.842 [2024-07-15 19:43:51.256814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.843 [2024-07-15 19:43:51.267535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.843 [2024-07-15 19:43:51.267574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:00.843 [2024-07-15 19:43:51.267591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.656 ms 00:22:00.843 [2024-07-15 19:43:51.267604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.843 [2024-07-15 19:43:51.267764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.843 [2024-07-15 19:43:51.267780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:00.843 [2024-07-15 19:43:51.267808] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:22:00.843 [2024-07-15 19:43:51.267833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.843 [2024-07-15 19:43:51.284682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.843 [2024-07-15 19:43:51.284722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:00.843 [2024-07-15 19:43:51.284735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.829 ms 00:22:00.843 [2024-07-15 19:43:51.284748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.843 [2024-07-15 19:43:51.300378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.843 [2024-07-15 19:43:51.300419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:00.843 [2024-07-15 19:43:51.300431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.574 ms 00:22:00.843 [2024-07-15 19:43:51.300452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.843 [2024-07-15 19:43:51.316437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.843 [2024-07-15 19:43:51.316498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:00.843 [2024-07-15 19:43:51.316512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.944 ms 00:22:00.843 [2024-07-15 19:43:51.316525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.843 [2024-07-15 19:43:51.332678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.843 [2024-07-15 19:43:51.332718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:00.843 [2024-07-15 19:43:51.332731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.072 ms 00:22:00.843 [2024-07-15 19:43:51.332743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.843 [2024-07-15 19:43:51.332788] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:00.843 [2024-07-15 19:43:51.332810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.332824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.332838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.332850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.332864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.332875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.332893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.332904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.332917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.332929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 
19:43:51.332942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.332954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.332967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.332978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.332991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:22:00.843 [2024-07-15 19:43:51.333261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:00.843 [2024-07-15 19:43:51.333769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.333790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.333802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.333816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.333827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.333841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.333852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.333865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.333877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.333891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.333902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.333918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.333929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.333943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.333954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.333968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.333979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.333992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.334003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.334017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.334028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.334048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.334059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.334072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.334083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:00.844 [2024-07-15 19:43:51.334105] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:00.844 [2024-07-15 19:43:51.334115] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03f52e51-6fc1-4d5a-8b5f-2a9f46a0322e 00:22:00.844 [2024-07-15 19:43:51.334136] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:00.844 [2024-07-15 19:43:51.334147] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:00.844 [2024-07-15 19:43:51.334159] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:00.844 [2024-07-15 19:43:51.334170] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:00.844 [2024-07-15 19:43:51.334182] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:00.844 [2024-07-15 19:43:51.334193] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:00.844 [2024-07-15 19:43:51.334205] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:00.844 [2024-07-15 19:43:51.334214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:00.844 [2024-07-15 19:43:51.334240] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:00.844 [2024-07-15 19:43:51.334251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:00.844 [2024-07-15 19:43:51.334264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:00.844 [2024-07-15 19:43:51.334274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.464 ms 00:22:00.844 [2024-07-15 19:43:51.334287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.844 [2024-07-15 19:43:51.355474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.844 [2024-07-15 19:43:51.355531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:00.844 [2024-07-15 19:43:51.355545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.150 ms 00:22:00.844 [2024-07-15 19:43:51.355562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.844 [2024-07-15 19:43:51.356153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.844 [2024-07-15 19:43:51.356170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:00.844 [2024-07-15 19:43:51.356184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:22:00.844 [2024-07-15 19:43:51.356201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.844 [2024-07-15 19:43:51.423642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.844 [2024-07-15 19:43:51.423708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:00.844 [2024-07-15 19:43:51.423723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.844 [2024-07-15 19:43:51.423737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.844 [2024-07-15 19:43:51.423869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.844 [2024-07-15 19:43:51.423885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:00.844 [2024-07-15 19:43:51.423896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.844 [2024-07-15 19:43:51.423913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.844 [2024-07-15 19:43:51.423968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.844 [2024-07-15 19:43:51.423984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:00.844 [2024-07-15 19:43:51.423995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.844 [2024-07-15 19:43:51.424011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.844 [2024-07-15 19:43:51.424031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.844 [2024-07-15 19:43:51.424045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:00.844 [2024-07-15 19:43:51.424055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.844 [2024-07-15 19:43:51.424084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.844 [2024-07-15 19:43:51.548248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.844 [2024-07-15 19:43:51.548332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:00.844 [2024-07-15 19:43:51.548349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.844 [2024-07-15 19:43:51.548363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.103 [2024-07-15 
19:43:51.655636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.104 [2024-07-15 19:43:51.655707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:01.104 [2024-07-15 19:43:51.655723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.104 [2024-07-15 19:43:51.655738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.104 [2024-07-15 19:43:51.655856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.104 [2024-07-15 19:43:51.655874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:01.104 [2024-07-15 19:43:51.655888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.104 [2024-07-15 19:43:51.655905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.104 [2024-07-15 19:43:51.655938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.104 [2024-07-15 19:43:51.655952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:01.104 [2024-07-15 19:43:51.655964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.104 [2024-07-15 19:43:51.655978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.104 [2024-07-15 19:43:51.656103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.104 [2024-07-15 19:43:51.656120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:01.104 [2024-07-15 19:43:51.656133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.104 [2024-07-15 19:43:51.656146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.104 [2024-07-15 19:43:51.656187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.104 [2024-07-15 19:43:51.656204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:01.104 [2024-07-15 19:43:51.656216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.104 [2024-07-15 19:43:51.656229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.104 [2024-07-15 19:43:51.656270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.104 [2024-07-15 19:43:51.656288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:01.104 [2024-07-15 19:43:51.656308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.104 [2024-07-15 19:43:51.656325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.104 [2024-07-15 19:43:51.656372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.104 [2024-07-15 19:43:51.656387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:01.104 [2024-07-15 19:43:51.656398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.104 [2024-07-15 19:43:51.656412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.104 [2024-07-15 19:43:51.656557] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 430.273 ms, result 0 00:22:02.062 19:43:52 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:02.062 19:43:52 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:02.321 [2024-07-15 19:43:52.892672] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:22:02.321 [2024-07-15 19:43:52.892879] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81663 ] 00:22:02.321 [2024-07-15 19:43:53.073748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.587 [2024-07-15 19:43:53.309905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.157 [2024-07-15 19:43:53.713727] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:03.157 [2024-07-15 19:43:53.713810] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:03.157 [2024-07-15 19:43:53.876160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.157 [2024-07-15 19:43:53.876230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:03.157 [2024-07-15 19:43:53.876246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:03.157 [2024-07-15 19:43:53.876257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.157 [2024-07-15 19:43:53.879746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.157 [2024-07-15 19:43:53.879798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:03.157 [2024-07-15 19:43:53.879811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.465 ms 00:22:03.157 [2024-07-15 19:43:53.879821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.157 [2024-07-15 19:43:53.879937] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:03.157 [2024-07-15 19:43:53.881024] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:03.157 [2024-07-15 19:43:53.881060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.157 [2024-07-15 19:43:53.881071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:03.157 [2024-07-15 19:43:53.881083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.133 ms 00:22:03.157 [2024-07-15 19:43:53.881093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.157 [2024-07-15 19:43:53.882664] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:03.157 [2024-07-15 19:43:53.904596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.157 [2024-07-15 19:43:53.904672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:03.157 [2024-07-15 19:43:53.904697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.927 ms 00:22:03.157 [2024-07-15 19:43:53.904709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.157 [2024-07-15 19:43:53.904908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.157 [2024-07-15 19:43:53.904925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:03.157 [2024-07-15 19:43:53.904937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.048 ms 00:22:03.157 [2024-07-15 19:43:53.904948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-07-15 19:43:53.912707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-07-15 19:43:53.912761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:03.158 [2024-07-15 19:43:53.912774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.706 ms 00:22:03.158 [2024-07-15 19:43:53.912793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-07-15 19:43:53.912921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-07-15 19:43:53.912939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:03.158 [2024-07-15 19:43:53.912950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:03.158 [2024-07-15 19:43:53.912960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-07-15 19:43:53.912997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-07-15 19:43:53.913008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:03.158 [2024-07-15 19:43:53.913020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:03.158 [2024-07-15 19:43:53.913033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-07-15 19:43:53.913059] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:03.158 [2024-07-15 19:43:53.918515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-07-15 19:43:53.918557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:03.158 [2024-07-15 19:43:53.918570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.462 ms 00:22:03.158 [2024-07-15 19:43:53.918580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-07-15 19:43:53.918669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-07-15 19:43:53.918682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:03.158 [2024-07-15 19:43:53.918693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:03.158 [2024-07-15 19:43:53.918703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-07-15 19:43:53.918727] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:03.158 [2024-07-15 19:43:53.918752] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:03.158 [2024-07-15 19:43:53.918806] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:03.158 [2024-07-15 19:43:53.918826] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:03.158 [2024-07-15 19:43:53.918914] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:03.158 [2024-07-15 19:43:53.918928] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:03.158 [2024-07-15 19:43:53.918942] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:03.158 [2024-07-15 19:43:53.918955] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:03.158 [2024-07-15 19:43:53.918967] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:03.158 [2024-07-15 19:43:53.918978] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:03.158 [2024-07-15 19:43:53.918991] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:03.158 [2024-07-15 19:43:53.919002] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:03.158 [2024-07-15 19:43:53.919012] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:03.158 [2024-07-15 19:43:53.919023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-07-15 19:43:53.919033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:03.158 [2024-07-15 19:43:53.919044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:22:03.158 [2024-07-15 19:43:53.919053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-07-15 19:43:53.919128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-07-15 19:43:53.919139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:03.158 [2024-07-15 19:43:53.919149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:03.158 [2024-07-15 19:43:53.919162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-07-15 19:43:53.919250] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:03.158 [2024-07-15 19:43:53.919262] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:03.158 [2024-07-15 19:43:53.919272] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:03.158 [2024-07-15 19:43:53.919282] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.158 [2024-07-15 19:43:53.919293] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:03.158 [2024-07-15 19:43:53.919303] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:03.158 [2024-07-15 19:43:53.919312] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:03.158 [2024-07-15 19:43:53.919322] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:03.158 [2024-07-15 19:43:53.919331] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:03.158 [2024-07-15 19:43:53.919340] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:03.158 [2024-07-15 19:43:53.919349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:03.158 [2024-07-15 19:43:53.919360] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:03.158 [2024-07-15 19:43:53.919371] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:03.158 [2024-07-15 19:43:53.919380] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:03.158 [2024-07-15 19:43:53.919390] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:03.158 [2024-07-15 19:43:53.919400] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.158 [2024-07-15 19:43:53.919409] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:03.158 [2024-07-15 19:43:53.919419] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:03.158 [2024-07-15 19:43:53.919441] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.158 [2024-07-15 19:43:53.919450] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:03.158 [2024-07-15 19:43:53.919460] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:03.158 [2024-07-15 19:43:53.919469] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.158 [2024-07-15 19:43:53.919479] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:03.158 [2024-07-15 19:43:53.919488] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:03.158 [2024-07-15 19:43:53.919496] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.158 [2024-07-15 19:43:53.919506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:03.158 [2024-07-15 19:43:53.919515] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:03.158 [2024-07-15 19:43:53.919524] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.158 [2024-07-15 19:43:53.919533] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:03.158 [2024-07-15 19:43:53.919542] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:03.158 [2024-07-15 19:43:53.919551] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.158 [2024-07-15 19:43:53.919560] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:03.158 [2024-07-15 19:43:53.919569] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:03.158 [2024-07-15 19:43:53.919578] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:03.158 [2024-07-15 19:43:53.919587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:03.158 [2024-07-15 19:43:53.919596] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:03.158 [2024-07-15 19:43:53.919605] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:03.158 [2024-07-15 19:43:53.919613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:03.158 [2024-07-15 19:43:53.919622] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:03.158 [2024-07-15 19:43:53.919631] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.158 [2024-07-15 19:43:53.919640] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:03.158 [2024-07-15 19:43:53.919650] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:03.158 [2024-07-15 19:43:53.919659] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.158 [2024-07-15 19:43:53.919668] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:03.158 [2024-07-15 19:43:53.919679] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:03.158 [2024-07-15 19:43:53.919688] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:03.158 [2024-07-15 19:43:53.919698] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.158 [2024-07-15 19:43:53.919707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:03.158 
[2024-07-15 19:43:53.919717] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:03.158 [2024-07-15 19:43:53.919726] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:03.159 [2024-07-15 19:43:53.919736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:03.159 [2024-07-15 19:43:53.919748] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:03.159 [2024-07-15 19:43:53.919757] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:03.159 [2024-07-15 19:43:53.919767] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:03.159 [2024-07-15 19:43:53.919793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:03.159 [2024-07-15 19:43:53.919805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:03.159 [2024-07-15 19:43:53.919816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:03.159 [2024-07-15 19:43:53.919827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:03.159 [2024-07-15 19:43:53.919837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:03.159 [2024-07-15 19:43:53.919847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:03.159 [2024-07-15 19:43:53.919857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:03.159 [2024-07-15 19:43:53.919867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:03.159 [2024-07-15 19:43:53.919877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:03.159 [2024-07-15 19:43:53.919887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:03.159 [2024-07-15 19:43:53.919897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:03.159 [2024-07-15 19:43:53.919908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:03.159 [2024-07-15 19:43:53.919918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:03.159 [2024-07-15 19:43:53.919928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:03.159 [2024-07-15 19:43:53.919938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:03.159 [2024-07-15 19:43:53.919948] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:03.159 [2024-07-15 19:43:53.919959] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:03.159 [2024-07-15 19:43:53.919970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:03.159 [2024-07-15 19:43:53.919980] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:03.159 [2024-07-15 19:43:53.919990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:03.159 [2024-07-15 19:43:53.920000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:03.159 [2024-07-15 19:43:53.920011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.159 [2024-07-15 19:43:53.920021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:03.159 [2024-07-15 19:43:53.920031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:22:03.159 [2024-07-15 19:43:53.920042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.418 [2024-07-15 19:43:53.977486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.418 [2024-07-15 19:43:53.977562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:03.418 [2024-07-15 19:43:53.977578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.382 ms 00:22:03.418 [2024-07-15 19:43:53.977589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.418 [2024-07-15 19:43:53.977764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.418 [2024-07-15 19:43:53.977786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:03.418 [2024-07-15 19:43:53.977799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:03.418 [2024-07-15 19:43:53.977813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.418 [2024-07-15 19:43:54.028567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.418 [2024-07-15 19:43:54.028611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:03.418 [2024-07-15 19:43:54.028625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.725 ms 00:22:03.418 [2024-07-15 19:43:54.028634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.418 [2024-07-15 19:43:54.028751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.418 [2024-07-15 19:43:54.028764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:03.418 [2024-07-15 19:43:54.028775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:03.418 [2024-07-15 19:43:54.028785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.418 [2024-07-15 19:43:54.029236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.418 [2024-07-15 19:43:54.029250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:03.418 [2024-07-15 19:43:54.029261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:22:03.418 [2024-07-15 19:43:54.029271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.418 [2024-07-15 
19:43:54.029392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.418 [2024-07-15 19:43:54.029408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:03.418 [2024-07-15 19:43:54.029419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:22:03.418 [2024-07-15 19:43:54.029429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.418 [2024-07-15 19:43:54.049947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.418 [2024-07-15 19:43:54.049991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:03.418 [2024-07-15 19:43:54.050004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.493 ms 00:22:03.418 [2024-07-15 19:43:54.050014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.418 [2024-07-15 19:43:54.070470] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:03.418 [2024-07-15 19:43:54.070510] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:03.418 [2024-07-15 19:43:54.070526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.418 [2024-07-15 19:43:54.070537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:03.418 [2024-07-15 19:43:54.070549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.379 ms 00:22:03.418 [2024-07-15 19:43:54.070559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.418 [2024-07-15 19:43:54.101500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.418 [2024-07-15 19:43:54.101539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:03.418 [2024-07-15 19:43:54.101553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.858 ms 00:22:03.418 [2024-07-15 19:43:54.101563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.418 [2024-07-15 19:43:54.122946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.418 [2024-07-15 19:43:54.122991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:03.418 [2024-07-15 19:43:54.123034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.299 ms 00:22:03.418 [2024-07-15 19:43:54.123045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.418 [2024-07-15 19:43:54.144559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.418 [2024-07-15 19:43:54.144600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:03.418 [2024-07-15 19:43:54.144614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.429 ms 00:22:03.418 [2024-07-15 19:43:54.144623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.418 [2024-07-15 19:43:54.145538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.418 [2024-07-15 19:43:54.145567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:03.418 [2024-07-15 19:43:54.145580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 00:22:03.418 [2024-07-15 19:43:54.145591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.678 [2024-07-15 19:43:54.240888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:03.678 [2024-07-15 19:43:54.240957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:03.678 [2024-07-15 19:43:54.240973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.268 ms 00:22:03.678 [2024-07-15 19:43:54.240983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.678 [2024-07-15 19:43:54.254781] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:03.678 [2024-07-15 19:43:54.271955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.678 [2024-07-15 19:43:54.272017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:03.678 [2024-07-15 19:43:54.272034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.846 ms 00:22:03.678 [2024-07-15 19:43:54.272045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.678 [2024-07-15 19:43:54.272174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.678 [2024-07-15 19:43:54.272187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:03.678 [2024-07-15 19:43:54.272202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:03.678 [2024-07-15 19:43:54.272212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.678 [2024-07-15 19:43:54.272274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.678 [2024-07-15 19:43:54.272285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:03.678 [2024-07-15 19:43:54.272296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:03.678 [2024-07-15 19:43:54.272306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.678 [2024-07-15 19:43:54.272329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.678 [2024-07-15 19:43:54.272340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:03.678 [2024-07-15 19:43:54.272350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:03.678 [2024-07-15 19:43:54.272364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.678 [2024-07-15 19:43:54.272399] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:03.678 [2024-07-15 19:43:54.272411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.678 [2024-07-15 19:43:54.272421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:03.678 [2024-07-15 19:43:54.272432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:03.678 [2024-07-15 19:43:54.272441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.678 [2024-07-15 19:43:54.312914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.678 [2024-07-15 19:43:54.312974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:03.678 [2024-07-15 19:43:54.312996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.448 ms 00:22:03.678 [2024-07-15 19:43:54.313007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.678 [2024-07-15 19:43:54.313162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.678 [2024-07-15 19:43:54.313176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:22:03.678 [2024-07-15 19:43:54.313188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:03.678 [2024-07-15 19:43:54.313198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.678 [2024-07-15 19:43:54.314267] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:03.678 [2024-07-15 19:43:54.319860] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 437.723 ms, result 0 00:22:03.678 [2024-07-15 19:43:54.320491] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:03.678 [2024-07-15 19:43:54.339979] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:12.390  Copying: 32/256 [MB] (32 MBps) Copying: 61/256 [MB] (29 MBps) Copying: 89/256 [MB] (28 MBps) Copying: 118/256 [MB] (28 MBps) Copying: 148/256 [MB] (29 MBps) Copying: 176/256 [MB] (28 MBps) Copying: 205/256 [MB] (29 MBps) Copying: 233/256 [MB] (27 MBps) Copying: 256/256 [MB] (average 29 MBps)[2024-07-15 19:44:03.144627] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:12.390 [2024-07-15 19:44:03.160235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.390 [2024-07-15 19:44:03.160282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:12.390 [2024-07-15 19:44:03.160314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:12.390 [2024-07-15 19:44:03.160325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.390 [2024-07-15 19:44:03.160348] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:12.390 [2024-07-15 19:44:03.164303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.390 [2024-07-15 19:44:03.164341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:12.390 [2024-07-15 19:44:03.164369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.938 ms 00:22:12.390 [2024-07-15 19:44:03.164379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.390 [2024-07-15 19:44:03.164602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.390 [2024-07-15 19:44:03.164614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:12.390 [2024-07-15 19:44:03.164624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:22:12.390 [2024-07-15 19:44:03.164634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.390 [2024-07-15 19:44:03.167589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.390 [2024-07-15 19:44:03.167614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:12.390 [2024-07-15 19:44:03.167626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.938 ms 00:22:12.390 [2024-07-15 19:44:03.167655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.390 [2024-07-15 19:44:03.173397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.390 [2024-07-15 19:44:03.173432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:12.390 [2024-07-15 19:44:03.173443] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 5.722 ms 00:22:12.390 [2024-07-15 19:44:03.173453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.649 [2024-07-15 19:44:03.212803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.649 [2024-07-15 19:44:03.212842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:12.649 [2024-07-15 19:44:03.212871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.285 ms 00:22:12.649 [2024-07-15 19:44:03.212881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.649 [2024-07-15 19:44:03.235060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.649 [2024-07-15 19:44:03.235101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:12.649 [2024-07-15 19:44:03.235115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.121 ms 00:22:12.649 [2024-07-15 19:44:03.235125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.649 [2024-07-15 19:44:03.235271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.649 [2024-07-15 19:44:03.235285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:12.649 [2024-07-15 19:44:03.235296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:22:12.649 [2024-07-15 19:44:03.235306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.649 [2024-07-15 19:44:03.275153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.649 [2024-07-15 19:44:03.275196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:12.649 [2024-07-15 19:44:03.275226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.828 ms 00:22:12.649 [2024-07-15 19:44:03.275238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.650 [2024-07-15 19:44:03.316363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.650 [2024-07-15 19:44:03.316404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:12.650 [2024-07-15 19:44:03.316417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.064 ms 00:22:12.650 [2024-07-15 19:44:03.316426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.650 [2024-07-15 19:44:03.358320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.650 [2024-07-15 19:44:03.358367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:12.650 [2024-07-15 19:44:03.358389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.833 ms 00:22:12.650 [2024-07-15 19:44:03.358400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.650 [2024-07-15 19:44:03.398956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.650 [2024-07-15 19:44:03.399019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:12.650 [2024-07-15 19:44:03.399034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.460 ms 00:22:12.650 [2024-07-15 19:44:03.399046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.650 [2024-07-15 19:44:03.399108] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:12.650 [2024-07-15 19:44:03.399129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 
/ 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399749] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.399990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.400002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.400015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.400027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.400039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.400051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.400063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 
19:44:03.400075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.400088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.400100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.400112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.400135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.400147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.400158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.400169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.400180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:12.650 [2024-07-15 19:44:03.400191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:12.651 [2024-07-15 19:44:03.400219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:12.651 [2024-07-15 19:44:03.400231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:12.651 [2024-07-15 19:44:03.400243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:12.651 [2024-07-15 19:44:03.400255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:12.651 [2024-07-15 19:44:03.400267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:12.651 [2024-07-15 19:44:03.400279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:12.651 [2024-07-15 19:44:03.400292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:12.651 [2024-07-15 19:44:03.400304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:12.651 [2024-07-15 19:44:03.400316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:12.651 [2024-07-15 19:44:03.400328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:12.651 [2024-07-15 19:44:03.400340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:12.651 [2024-07-15 19:44:03.400352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:12.651 [2024-07-15 19:44:03.400365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:12.651 [2024-07-15 19:44:03.400377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:12.651 [2024-07-15 19:44:03.400390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 
00:22:12.651 [2024-07-15 19:44:03.400410] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:12.651 [2024-07-15 19:44:03.400422] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03f52e51-6fc1-4d5a-8b5f-2a9f46a0322e 00:22:12.651 [2024-07-15 19:44:03.400434] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:12.651 [2024-07-15 19:44:03.400445] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:12.651 [2024-07-15 19:44:03.400468] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:12.651 [2024-07-15 19:44:03.400480] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:12.651 [2024-07-15 19:44:03.400491] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:12.651 [2024-07-15 19:44:03.400503] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:12.651 [2024-07-15 19:44:03.400514] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:12.651 [2024-07-15 19:44:03.400525] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:12.651 [2024-07-15 19:44:03.400535] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:12.651 [2024-07-15 19:44:03.400546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.651 [2024-07-15 19:44:03.400557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:12.651 [2024-07-15 19:44:03.400570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.439 ms 00:22:12.651 [2024-07-15 19:44:03.400585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.651 [2024-07-15 19:44:03.422950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.651 [2024-07-15 19:44:03.422999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:12.651 [2024-07-15 19:44:03.423029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.340 ms 00:22:12.651 [2024-07-15 19:44:03.423040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.651 [2024-07-15 19:44:03.423564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.651 [2024-07-15 19:44:03.423586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:12.651 [2024-07-15 19:44:03.423607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:22:12.651 [2024-07-15 19:44:03.423618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.909 [2024-07-15 19:44:03.474085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.909 [2024-07-15 19:44:03.474152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:12.909 [2024-07-15 19:44:03.474167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.909 [2024-07-15 19:44:03.474179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.910 [2024-07-15 19:44:03.474265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.910 [2024-07-15 19:44:03.474278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:12.910 [2024-07-15 19:44:03.474296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.910 [2024-07-15 19:44:03.474306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.910 [2024-07-15 
19:44:03.474359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.910 [2024-07-15 19:44:03.474373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:12.910 [2024-07-15 19:44:03.474393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.910 [2024-07-15 19:44:03.474403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.910 [2024-07-15 19:44:03.474424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.910 [2024-07-15 19:44:03.474435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:12.910 [2024-07-15 19:44:03.474454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.910 [2024-07-15 19:44:03.474469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.910 [2024-07-15 19:44:03.599852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.910 [2024-07-15 19:44:03.599920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:12.910 [2024-07-15 19:44:03.599934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.910 [2024-07-15 19:44:03.599945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.169 [2024-07-15 19:44:03.703889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.169 [2024-07-15 19:44:03.703974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:13.169 [2024-07-15 19:44:03.703989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.169 [2024-07-15 19:44:03.704005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.169 [2024-07-15 19:44:03.704076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.169 [2024-07-15 19:44:03.704088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:13.169 [2024-07-15 19:44:03.704098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.169 [2024-07-15 19:44:03.704108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.169 [2024-07-15 19:44:03.704138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.169 [2024-07-15 19:44:03.704149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:13.169 [2024-07-15 19:44:03.704159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.169 [2024-07-15 19:44:03.704168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.169 [2024-07-15 19:44:03.704280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.169 [2024-07-15 19:44:03.704294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:13.169 [2024-07-15 19:44:03.704304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.169 [2024-07-15 19:44:03.704315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.169 [2024-07-15 19:44:03.704351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.169 [2024-07-15 19:44:03.704363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:13.169 [2024-07-15 19:44:03.704374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.169 [2024-07-15 19:44:03.704384] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.169 [2024-07-15 19:44:03.704425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.169 [2024-07-15 19:44:03.704437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:13.169 [2024-07-15 19:44:03.704448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.169 [2024-07-15 19:44:03.704458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.169 [2024-07-15 19:44:03.704503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.169 [2024-07-15 19:44:03.704515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:13.169 [2024-07-15 19:44:03.704525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.169 [2024-07-15 19:44:03.704535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.169 [2024-07-15 19:44:03.704674] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 544.430 ms, result 0 00:22:14.544 00:22:14.544 00:22:14.544 19:44:04 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:22:14.544 19:44:04 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:14.803 19:44:05 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:14.803 [2024-07-15 19:44:05.570593] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:22:14.803 [2024-07-15 19:44:05.570731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81801 ] 00:22:15.062 [2024-07-15 19:44:05.734972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.321 [2024-07-15 19:44:05.969469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.893 [2024-07-15 19:44:06.424927] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:15.893 [2024-07-15 19:44:06.425008] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:15.893 [2024-07-15 19:44:06.598154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.893 [2024-07-15 19:44:06.598255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:15.893 [2024-07-15 19:44:06.598296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:15.893 [2024-07-15 19:44:06.598326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.893 [2024-07-15 19:44:06.605849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.893 [2024-07-15 19:44:06.605928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:15.893 [2024-07-15 19:44:06.605957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.439 ms 00:22:15.893 [2024-07-15 19:44:06.605981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.893 [2024-07-15 19:44:06.606428] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:22:15.893 [2024-07-15 19:44:06.607864] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:15.893 [2024-07-15 19:44:06.607904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.893 [2024-07-15 19:44:06.607917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:15.893 [2024-07-15 19:44:06.607930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.493 ms 00:22:15.893 [2024-07-15 19:44:06.607971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.893 [2024-07-15 19:44:06.609608] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:15.893 [2024-07-15 19:44:06.633677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.893 [2024-07-15 19:44:06.633730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:15.893 [2024-07-15 19:44:06.633755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.068 ms 00:22:15.893 [2024-07-15 19:44:06.633768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.893 [2024-07-15 19:44:06.633931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.893 [2024-07-15 19:44:06.633950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:15.893 [2024-07-15 19:44:06.633963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:15.893 [2024-07-15 19:44:06.633975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.893 [2024-07-15 19:44:06.641753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.893 [2024-07-15 19:44:06.641820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:15.893 [2024-07-15 19:44:06.641837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.721 ms 00:22:15.893 [2024-07-15 19:44:06.641850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.893 [2024-07-15 19:44:06.641995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.893 [2024-07-15 19:44:06.642013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:15.893 [2024-07-15 19:44:06.642026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:22:15.893 [2024-07-15 19:44:06.642038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.894 [2024-07-15 19:44:06.642088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.894 [2024-07-15 19:44:06.642105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:15.894 [2024-07-15 19:44:06.642121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:15.894 [2024-07-15 19:44:06.642140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.894 [2024-07-15 19:44:06.642175] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:15.894 [2024-07-15 19:44:06.648629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.894 [2024-07-15 19:44:06.648671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:15.894 [2024-07-15 19:44:06.648686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.464 ms 00:22:15.894 [2024-07-15 19:44:06.648698] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.894 [2024-07-15 19:44:06.648809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.894 [2024-07-15 19:44:06.648825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:15.894 [2024-07-15 19:44:06.648839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:15.894 [2024-07-15 19:44:06.648850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.894 [2024-07-15 19:44:06.648879] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:15.894 [2024-07-15 19:44:06.648907] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:15.894 [2024-07-15 19:44:06.648953] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:15.894 [2024-07-15 19:44:06.648974] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:15.894 [2024-07-15 19:44:06.649075] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:15.894 [2024-07-15 19:44:06.649091] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:15.894 [2024-07-15 19:44:06.649107] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:15.894 [2024-07-15 19:44:06.649122] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:15.894 [2024-07-15 19:44:06.649137] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:15.894 [2024-07-15 19:44:06.649158] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:15.894 [2024-07-15 19:44:06.649174] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:15.894 [2024-07-15 19:44:06.649186] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:15.894 [2024-07-15 19:44:06.649197] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:15.894 [2024-07-15 19:44:06.649210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.894 [2024-07-15 19:44:06.649222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:15.894 [2024-07-15 19:44:06.649234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:22:15.894 [2024-07-15 19:44:06.649246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.894 [2024-07-15 19:44:06.649341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.894 [2024-07-15 19:44:06.649354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:15.894 [2024-07-15 19:44:06.649366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:15.894 [2024-07-15 19:44:06.649383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.894 [2024-07-15 19:44:06.649489] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:15.894 [2024-07-15 19:44:06.649509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:15.894 [2024-07-15 19:44:06.649522] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:22:15.894 [2024-07-15 19:44:06.649534] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.894 [2024-07-15 19:44:06.649546] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:15.894 [2024-07-15 19:44:06.649558] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:15.894 [2024-07-15 19:44:06.649569] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:15.894 [2024-07-15 19:44:06.649580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:15.894 [2024-07-15 19:44:06.649592] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:15.894 [2024-07-15 19:44:06.649603] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:15.894 [2024-07-15 19:44:06.649614] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:15.894 [2024-07-15 19:44:06.649625] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:15.894 [2024-07-15 19:44:06.649635] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:15.894 [2024-07-15 19:44:06.649647] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:15.894 [2024-07-15 19:44:06.649659] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:15.894 [2024-07-15 19:44:06.649671] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.894 [2024-07-15 19:44:06.649682] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:15.894 [2024-07-15 19:44:06.649693] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:15.894 [2024-07-15 19:44:06.649718] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.894 [2024-07-15 19:44:06.649729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:15.894 [2024-07-15 19:44:06.649740] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:15.894 [2024-07-15 19:44:06.649751] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.894 [2024-07-15 19:44:06.649769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:15.894 [2024-07-15 19:44:06.649794] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:15.894 [2024-07-15 19:44:06.649806] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.894 [2024-07-15 19:44:06.649817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:15.894 [2024-07-15 19:44:06.649828] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:15.894 [2024-07-15 19:44:06.649840] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.894 [2024-07-15 19:44:06.649851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:15.894 [2024-07-15 19:44:06.649862] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:15.894 [2024-07-15 19:44:06.649872] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.894 [2024-07-15 19:44:06.649884] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:15.894 [2024-07-15 19:44:06.649895] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:15.894 [2024-07-15 19:44:06.649906] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:15.894 [2024-07-15 19:44:06.649916] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:22:15.894 [2024-07-15 19:44:06.649927] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:15.894 [2024-07-15 19:44:06.649938] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:15.894 [2024-07-15 19:44:06.649950] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:15.894 [2024-07-15 19:44:06.649961] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:15.894 [2024-07-15 19:44:06.649971] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.894 [2024-07-15 19:44:06.649982] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:15.894 [2024-07-15 19:44:06.649993] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:15.894 [2024-07-15 19:44:06.650004] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.894 [2024-07-15 19:44:06.650014] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:15.894 [2024-07-15 19:44:06.650026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:15.894 [2024-07-15 19:44:06.650040] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:15.894 [2024-07-15 19:44:06.650058] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.894 [2024-07-15 19:44:06.650075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:15.894 [2024-07-15 19:44:06.650090] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:15.894 [2024-07-15 19:44:06.650104] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:15.894 [2024-07-15 19:44:06.650119] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:15.894 [2024-07-15 19:44:06.650133] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:15.894 [2024-07-15 19:44:06.650148] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:15.894 [2024-07-15 19:44:06.650164] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:15.894 [2024-07-15 19:44:06.650188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:15.894 [2024-07-15 19:44:06.650205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:15.894 [2024-07-15 19:44:06.650222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:15.894 [2024-07-15 19:44:06.650238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:15.894 [2024-07-15 19:44:06.650255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:15.894 [2024-07-15 19:44:06.650271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:15.894 [2024-07-15 19:44:06.650287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:15.894 [2024-07-15 19:44:06.650303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:15.894 [2024-07-15 19:44:06.650320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:15.894 [2024-07-15 19:44:06.650336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:15.894 [2024-07-15 19:44:06.650352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:15.894 [2024-07-15 19:44:06.650368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:15.894 [2024-07-15 19:44:06.650395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:15.894 [2024-07-15 19:44:06.650409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:15.894 [2024-07-15 19:44:06.650422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:15.894 [2024-07-15 19:44:06.650434] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:15.894 [2024-07-15 19:44:06.650448] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:15.895 [2024-07-15 19:44:06.650461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:15.895 [2024-07-15 19:44:06.650474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:15.895 [2024-07-15 19:44:06.650487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:15.895 [2024-07-15 19:44:06.650500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:15.895 [2024-07-15 19:44:06.650512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.895 [2024-07-15 19:44:06.650528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:15.895 [2024-07-15 19:44:06.650541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.090 ms 00:22:15.895 [2024-07-15 19:44:06.650553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.154 [2024-07-15 19:44:06.712157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.154 [2024-07-15 19:44:06.712373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:16.154 [2024-07-15 19:44:06.712489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.532 ms 00:22:16.154 [2024-07-15 19:44:06.712528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.154 [2024-07-15 19:44:06.712720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.154 [2024-07-15 19:44:06.712856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:16.154 [2024-07-15 19:44:06.712937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.058 ms 00:22:16.154 [2024-07-15 19:44:06.712991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.154 [2024-07-15 19:44:06.773429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.154 [2024-07-15 19:44:06.773651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:16.154 [2024-07-15 19:44:06.773745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.351 ms 00:22:16.154 [2024-07-15 19:44:06.773807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.154 [2024-07-15 19:44:06.773956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.154 [2024-07-15 19:44:06.774008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:16.154 [2024-07-15 19:44:06.774053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:16.154 [2024-07-15 19:44:06.774164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.154 [2024-07-15 19:44:06.774703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.154 [2024-07-15 19:44:06.774757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:16.154 [2024-07-15 19:44:06.774863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:22:16.154 [2024-07-15 19:44:06.774908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.154 [2024-07-15 19:44:06.775091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.154 [2024-07-15 19:44:06.775212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:16.154 [2024-07-15 19:44:06.775262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:22:16.154 [2024-07-15 19:44:06.775298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.154 [2024-07-15 19:44:06.800409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.154 [2024-07-15 19:44:06.800592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:16.154 [2024-07-15 19:44:06.800692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.000 ms 00:22:16.154 [2024-07-15 19:44:06.800737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.154 [2024-07-15 19:44:06.824710] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:16.154 [2024-07-15 19:44:06.824925] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:16.154 [2024-07-15 19:44:06.825048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.154 [2024-07-15 19:44:06.825089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:16.154 [2024-07-15 19:44:06.825127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.099 ms 00:22:16.154 [2024-07-15 19:44:06.825162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.154 [2024-07-15 19:44:06.861373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.154 [2024-07-15 19:44:06.861589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:16.154 [2024-07-15 19:44:06.861679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.089 ms 00:22:16.154 [2024-07-15 19:44:06.861723] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.154 [2024-07-15 19:44:06.885003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.154 [2024-07-15 19:44:06.885069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:16.154 [2024-07-15 19:44:06.885088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.113 ms 00:22:16.154 [2024-07-15 19:44:06.885100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.154 [2024-07-15 19:44:06.907973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.154 [2024-07-15 19:44:06.908038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:16.154 [2024-07-15 19:44:06.908055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.765 ms 00:22:16.154 [2024-07-15 19:44:06.908068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.154 [2024-07-15 19:44:06.909124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.154 [2024-07-15 19:44:06.909177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:16.154 [2024-07-15 19:44:06.909198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.901 ms 00:22:16.154 [2024-07-15 19:44:06.909216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.413 [2024-07-15 19:44:07.012885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.413 [2024-07-15 19:44:07.012967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:16.413 [2024-07-15 19:44:07.012990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.620 ms 00:22:16.413 [2024-07-15 19:44:07.013006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.413 [2024-07-15 19:44:07.028450] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:16.413 [2024-07-15 19:44:07.047188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.413 [2024-07-15 19:44:07.047260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:16.413 [2024-07-15 19:44:07.047279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.013 ms 00:22:16.413 [2024-07-15 19:44:07.047291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.413 [2024-07-15 19:44:07.047433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.413 [2024-07-15 19:44:07.047453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:16.413 [2024-07-15 19:44:07.047471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:16.413 [2024-07-15 19:44:07.047494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.413 [2024-07-15 19:44:07.047551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.413 [2024-07-15 19:44:07.047563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:16.413 [2024-07-15 19:44:07.047575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:16.413 [2024-07-15 19:44:07.047585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.413 [2024-07-15 19:44:07.047623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.413 [2024-07-15 19:44:07.047635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:22:16.413 [2024-07-15 19:44:07.047646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:16.413 [2024-07-15 19:44:07.047677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.413 [2024-07-15 19:44:07.047733] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:16.413 [2024-07-15 19:44:07.047747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.413 [2024-07-15 19:44:07.047759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:16.413 [2024-07-15 19:44:07.047771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:16.413 [2024-07-15 19:44:07.047782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.413 [2024-07-15 19:44:07.094182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.413 [2024-07-15 19:44:07.094242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:16.413 [2024-07-15 19:44:07.094268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.372 ms 00:22:16.413 [2024-07-15 19:44:07.094280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.413 [2024-07-15 19:44:07.094433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.413 [2024-07-15 19:44:07.094451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:16.413 [2024-07-15 19:44:07.094464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:16.413 [2024-07-15 19:44:07.094476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.413 [2024-07-15 19:44:07.095532] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:16.413 [2024-07-15 19:44:07.101564] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 497.087 ms, result 0 00:22:16.413 [2024-07-15 19:44:07.102414] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:16.413 [2024-07-15 19:44:07.124232] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:16.672  Copying: 4096/4096 [kB] (average 30 MBps)[2024-07-15 19:44:07.263903] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:16.672 [2024-07-15 19:44:07.282101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.672 [2024-07-15 19:44:07.282152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:16.672 [2024-07-15 19:44:07.282170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:16.672 [2024-07-15 19:44:07.282182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.672 [2024-07-15 19:44:07.282210] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:16.672 [2024-07-15 19:44:07.286797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.672 [2024-07-15 19:44:07.286837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:16.672 [2024-07-15 19:44:07.286852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.567 ms 00:22:16.672 [2024-07-15 
19:44:07.286864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.672 [2024-07-15 19:44:07.288740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.672 [2024-07-15 19:44:07.288789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:16.672 [2024-07-15 19:44:07.288806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.846 ms 00:22:16.672 [2024-07-15 19:44:07.288818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.672 [2024-07-15 19:44:07.292571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.672 [2024-07-15 19:44:07.292602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:16.672 [2024-07-15 19:44:07.292616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.731 ms 00:22:16.672 [2024-07-15 19:44:07.292635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.672 [2024-07-15 19:44:07.299615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.672 [2024-07-15 19:44:07.299648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:16.672 [2024-07-15 19:44:07.299661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.944 ms 00:22:16.672 [2024-07-15 19:44:07.299674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.672 [2024-07-15 19:44:07.348709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.672 [2024-07-15 19:44:07.348764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:16.672 [2024-07-15 19:44:07.348799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.979 ms 00:22:16.672 [2024-07-15 19:44:07.348811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.672 [2024-07-15 19:44:07.376094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.672 [2024-07-15 19:44:07.376144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:16.672 [2024-07-15 19:44:07.376161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.199 ms 00:22:16.672 [2024-07-15 19:44:07.376173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.672 [2024-07-15 19:44:07.376356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.672 [2024-07-15 19:44:07.376373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:16.672 [2024-07-15 19:44:07.376387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:22:16.672 [2024-07-15 19:44:07.376399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.672 [2024-07-15 19:44:07.422149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.672 [2024-07-15 19:44:07.422212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:16.672 [2024-07-15 19:44:07.422233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.724 ms 00:22:16.672 [2024-07-15 19:44:07.422248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.933 [2024-07-15 19:44:07.471946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.933 [2024-07-15 19:44:07.472027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:16.933 [2024-07-15 19:44:07.472045] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 49.602 ms 00:22:16.933 [2024-07-15 19:44:07.472058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.933 [2024-07-15 19:44:07.518461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.933 [2024-07-15 19:44:07.518519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:16.933 [2024-07-15 19:44:07.518536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.315 ms 00:22:16.933 [2024-07-15 19:44:07.518548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.933 [2024-07-15 19:44:07.564735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.933 [2024-07-15 19:44:07.564794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:16.933 [2024-07-15 19:44:07.564811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.075 ms 00:22:16.933 [2024-07-15 19:44:07.564823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.933 [2024-07-15 19:44:07.564892] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:16.933 [2024-07-15 19:44:07.564913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:16.933 [2024-07-15 19:44:07.564937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:16.933 [2024-07-15 19:44:07.564951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:16.933 [2024-07-15 19:44:07.564964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:16.933 [2024-07-15 19:44:07.564977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:16.933 [2024-07-15 19:44:07.564990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:16.933 [2024-07-15 19:44:07.565003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:16.933 [2024-07-15 19:44:07.565016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:16.933 [2024-07-15 19:44:07.565029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:16.933 [2024-07-15 19:44:07.565042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:16.933 [2024-07-15 19:44:07.565054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:16.933 [2024-07-15 19:44:07.565067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:16.933 [2024-07-15 19:44:07.565080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:16.933 [2024-07-15 19:44:07.565093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:16.933 [2024-07-15 19:44:07.565106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:16.933 [2024-07-15 19:44:07.565119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:16.933 [2024-07-15 19:44:07.565131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
17: 0 / 261120 wr_cnt: 0 state: free 00:22:16.933 [2024-07-15 19:44:07.565145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:16.933 [2024-07-15 19:44:07.565158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565456] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565811] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.565991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.566004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.566017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.566029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.566042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.566054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.566067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.566080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.566093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.566105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.566118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.566131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.566144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 
19:44:07.566157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.566170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.566182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.566194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.566207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.566220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.566233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.566246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.566259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:16.934 [2024-07-15 19:44:07.566281] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:16.934 [2024-07-15 19:44:07.566293] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03f52e51-6fc1-4d5a-8b5f-2a9f46a0322e 00:22:16.934 [2024-07-15 19:44:07.566306] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:16.934 [2024-07-15 19:44:07.566318] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:16.934 [2024-07-15 19:44:07.566343] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:16.934 [2024-07-15 19:44:07.566356] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:16.934 [2024-07-15 19:44:07.566367] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:16.934 [2024-07-15 19:44:07.566389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:16.934 [2024-07-15 19:44:07.566401] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:16.934 [2024-07-15 19:44:07.566412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:16.934 [2024-07-15 19:44:07.566422] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:16.934 [2024-07-15 19:44:07.566434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.934 [2024-07-15 19:44:07.566446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:16.934 [2024-07-15 19:44:07.566459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.544 ms 00:22:16.934 [2024-07-15 19:44:07.566476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.934 [2024-07-15 19:44:07.591519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.935 [2024-07-15 19:44:07.591561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:16.935 [2024-07-15 19:44:07.591577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.016 ms 00:22:16.935 [2024-07-15 19:44:07.591589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.935 [2024-07-15 19:44:07.592263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.935 [2024-07-15 
19:44:07.592285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:16.935 [2024-07-15 19:44:07.592304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:22:16.935 [2024-07-15 19:44:07.592316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.935 [2024-07-15 19:44:07.650016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:16.935 [2024-07-15 19:44:07.650068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:16.935 [2024-07-15 19:44:07.650085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:16.935 [2024-07-15 19:44:07.650099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.935 [2024-07-15 19:44:07.650231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:16.935 [2024-07-15 19:44:07.650248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:16.935 [2024-07-15 19:44:07.650271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:16.935 [2024-07-15 19:44:07.650287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.935 [2024-07-15 19:44:07.650363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:16.935 [2024-07-15 19:44:07.650393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:16.935 [2024-07-15 19:44:07.650411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:16.935 [2024-07-15 19:44:07.650423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.935 [2024-07-15 19:44:07.650451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:16.935 [2024-07-15 19:44:07.650464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:16.935 [2024-07-15 19:44:07.650476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:16.935 [2024-07-15 19:44:07.650494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.211 [2024-07-15 19:44:07.796179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.211 [2024-07-15 19:44:07.796237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:17.212 [2024-07-15 19:44:07.796255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.212 [2024-07-15 19:44:07.796267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.212 [2024-07-15 19:44:07.919197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.212 [2024-07-15 19:44:07.919261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:17.212 [2024-07-15 19:44:07.919284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.212 [2024-07-15 19:44:07.919297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.212 [2024-07-15 19:44:07.919380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.212 [2024-07-15 19:44:07.919394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:17.212 [2024-07-15 19:44:07.919406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.212 [2024-07-15 19:44:07.919418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.212 [2024-07-15 19:44:07.919451] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.212 [2024-07-15 19:44:07.919464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:17.212 [2024-07-15 19:44:07.919476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.212 [2024-07-15 19:44:07.919488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.212 [2024-07-15 19:44:07.919624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.212 [2024-07-15 19:44:07.919640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:17.212 [2024-07-15 19:44:07.919653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.212 [2024-07-15 19:44:07.919665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.212 [2024-07-15 19:44:07.919709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.212 [2024-07-15 19:44:07.919728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:17.212 [2024-07-15 19:44:07.919746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.212 [2024-07-15 19:44:07.919763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.212 [2024-07-15 19:44:07.919854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.212 [2024-07-15 19:44:07.919870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:17.212 [2024-07-15 19:44:07.919882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.212 [2024-07-15 19:44:07.919894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.212 [2024-07-15 19:44:07.919945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.212 [2024-07-15 19:44:07.919959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:17.212 [2024-07-15 19:44:07.919971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.212 [2024-07-15 19:44:07.919983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.212 [2024-07-15 19:44:07.920136] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 638.031 ms, result 0 00:22:19.143 00:22:19.143 00:22:19.143 19:44:09 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=81843 00:22:19.143 19:44:09 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:22:19.143 19:44:09 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 81843 00:22:19.143 19:44:09 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 81843 ']' 00:22:19.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.143 19:44:09 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.143 19:44:09 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:19.143 19:44:09 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.143 19:44:09 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:19.143 19:44:09 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:19.143 [2024-07-15 19:44:09.598392] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:22:19.143 [2024-07-15 19:44:09.598552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81843 ] 00:22:19.143 [2024-07-15 19:44:09.770467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.401 [2024-07-15 19:44:10.121683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.777 19:44:11 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:20.777 19:44:11 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:22:20.777 19:44:11 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:22:20.778 [2024-07-15 19:44:11.512626] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:20.778 [2024-07-15 19:44:11.512694] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:21.037 [2024-07-15 19:44:11.692812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.037 [2024-07-15 19:44:11.692882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:21.037 [2024-07-15 19:44:11.692914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:21.037 [2024-07-15 19:44:11.692928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.037 [2024-07-15 19:44:11.696256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.037 [2024-07-15 19:44:11.696298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:21.037 [2024-07-15 19:44:11.696310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.307 ms 00:22:21.037 [2024-07-15 19:44:11.696323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.037 [2024-07-15 19:44:11.696454] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:21.037 [2024-07-15 19:44:11.697672] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:21.037 [2024-07-15 19:44:11.697705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.037 [2024-07-15 19:44:11.697719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:21.037 [2024-07-15 19:44:11.697730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.262 ms 00:22:21.037 [2024-07-15 19:44:11.697743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.037 [2024-07-15 19:44:11.699229] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:21.037 [2024-07-15 19:44:11.720287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.037 [2024-07-15 19:44:11.720328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:21.037 [2024-07-15 19:44:11.720345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.051 ms 00:22:21.037 [2024-07-15 19:44:11.720355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.037 [2024-07-15 19:44:11.720473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.037 [2024-07-15 19:44:11.720488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:21.037 [2024-07-15 19:44:11.720502] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:21.037 [2024-07-15 19:44:11.720512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.037 [2024-07-15 19:44:11.727949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.037 [2024-07-15 19:44:11.727994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:21.037 [2024-07-15 19:44:11.728016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.378 ms 00:22:21.037 [2024-07-15 19:44:11.728026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.037 [2024-07-15 19:44:11.728178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.037 [2024-07-15 19:44:11.728193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:21.037 [2024-07-15 19:44:11.728207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:22:21.037 [2024-07-15 19:44:11.728217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.037 [2024-07-15 19:44:11.728256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.037 [2024-07-15 19:44:11.728267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:21.037 [2024-07-15 19:44:11.728279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:21.037 [2024-07-15 19:44:11.728289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.037 [2024-07-15 19:44:11.728318] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:21.037 [2024-07-15 19:44:11.734290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.037 [2024-07-15 19:44:11.734349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:21.037 [2024-07-15 19:44:11.734363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.980 ms 00:22:21.037 [2024-07-15 19:44:11.734375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.037 [2024-07-15 19:44:11.734467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.037 [2024-07-15 19:44:11.734485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:21.037 [2024-07-15 19:44:11.734496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:21.037 [2024-07-15 19:44:11.734512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.037 [2024-07-15 19:44:11.734535] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:21.037 [2024-07-15 19:44:11.734560] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:21.037 [2024-07-15 19:44:11.734602] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:21.037 [2024-07-15 19:44:11.734625] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:21.037 [2024-07-15 19:44:11.734712] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:21.037 [2024-07-15 19:44:11.734730] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:21.037 [2024-07-15 19:44:11.734745] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:21.037 [2024-07-15 19:44:11.734761] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:21.037 [2024-07-15 19:44:11.734774] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:21.037 [2024-07-15 19:44:11.734805] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:21.037 [2024-07-15 19:44:11.734815] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:21.037 [2024-07-15 19:44:11.734827] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:21.037 [2024-07-15 19:44:11.734837] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:21.037 [2024-07-15 19:44:11.734853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.037 [2024-07-15 19:44:11.734863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:21.037 [2024-07-15 19:44:11.734875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:22:21.037 [2024-07-15 19:44:11.734885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.037 [2024-07-15 19:44:11.734965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.037 [2024-07-15 19:44:11.734975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:21.037 [2024-07-15 19:44:11.734988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:21.037 [2024-07-15 19:44:11.734997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.037 [2024-07-15 19:44:11.735094] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:21.037 [2024-07-15 19:44:11.735109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:21.037 [2024-07-15 19:44:11.735122] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:21.037 [2024-07-15 19:44:11.735132] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.037 [2024-07-15 19:44:11.735145] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:21.037 [2024-07-15 19:44:11.735154] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:21.038 [2024-07-15 19:44:11.735167] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:21.038 [2024-07-15 19:44:11.735177] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:21.038 [2024-07-15 19:44:11.735191] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:21.038 [2024-07-15 19:44:11.735200] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:21.038 [2024-07-15 19:44:11.735212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:21.038 [2024-07-15 19:44:11.735221] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:21.038 [2024-07-15 19:44:11.735235] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:21.038 [2024-07-15 19:44:11.735244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:21.038 [2024-07-15 19:44:11.735256] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:21.038 [2024-07-15 19:44:11.735265] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.038 
[2024-07-15 19:44:11.735276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:21.038 [2024-07-15 19:44:11.735286] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:21.038 [2024-07-15 19:44:11.735297] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.038 [2024-07-15 19:44:11.735307] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:21.038 [2024-07-15 19:44:11.735318] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:21.038 [2024-07-15 19:44:11.735327] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:21.038 [2024-07-15 19:44:11.735338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:21.038 [2024-07-15 19:44:11.735347] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:21.038 [2024-07-15 19:44:11.735362] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:21.038 [2024-07-15 19:44:11.735371] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:21.038 [2024-07-15 19:44:11.735383] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:21.038 [2024-07-15 19:44:11.735401] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:21.038 [2024-07-15 19:44:11.735412] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:21.038 [2024-07-15 19:44:11.735422] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:21.038 [2024-07-15 19:44:11.735434] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:21.038 [2024-07-15 19:44:11.735444] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:21.038 [2024-07-15 19:44:11.735455] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:21.038 [2024-07-15 19:44:11.735464] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:21.038 [2024-07-15 19:44:11.735476] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:21.038 [2024-07-15 19:44:11.735485] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:21.038 [2024-07-15 19:44:11.735496] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:21.038 [2024-07-15 19:44:11.735505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:21.038 [2024-07-15 19:44:11.735517] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:21.038 [2024-07-15 19:44:11.735525] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.038 [2024-07-15 19:44:11.735539] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:21.038 [2024-07-15 19:44:11.735548] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:21.038 [2024-07-15 19:44:11.735559] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.038 [2024-07-15 19:44:11.735568] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:21.038 [2024-07-15 19:44:11.735583] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:21.038 [2024-07-15 19:44:11.735593] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:21.038 [2024-07-15 19:44:11.735605] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.038 [2024-07-15 19:44:11.735615] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:22:21.038 [2024-07-15 19:44:11.735627] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:21.038 [2024-07-15 19:44:11.735637] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:21.038 [2024-07-15 19:44:11.735648] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:21.038 [2024-07-15 19:44:11.735657] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:21.038 [2024-07-15 19:44:11.735669] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:21.038 [2024-07-15 19:44:11.735680] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:21.038 [2024-07-15 19:44:11.735694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:21.038 [2024-07-15 19:44:11.735706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:21.038 [2024-07-15 19:44:11.735723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:21.038 [2024-07-15 19:44:11.735733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:21.038 [2024-07-15 19:44:11.735746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:21.038 [2024-07-15 19:44:11.735756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:21.038 [2024-07-15 19:44:11.735769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:21.038 [2024-07-15 19:44:11.735790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:21.038 [2024-07-15 19:44:11.735803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:21.038 [2024-07-15 19:44:11.735813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:21.038 [2024-07-15 19:44:11.735826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:21.038 [2024-07-15 19:44:11.735836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:21.038 [2024-07-15 19:44:11.735849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:21.038 [2024-07-15 19:44:11.735859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:21.038 [2024-07-15 19:44:11.735873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:21.038 [2024-07-15 19:44:11.735883] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:21.038 [2024-07-15 
19:44:11.735896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:21.038 [2024-07-15 19:44:11.735907] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:21.038 [2024-07-15 19:44:11.735922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:21.038 [2024-07-15 19:44:11.735933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:21.038 [2024-07-15 19:44:11.735945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:21.038 [2024-07-15 19:44:11.735956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.038 [2024-07-15 19:44:11.735969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:21.038 [2024-07-15 19:44:11.735979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.916 ms 00:22:21.038 [2024-07-15 19:44:11.735991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.038 [2024-07-15 19:44:11.779480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.038 [2024-07-15 19:44:11.779533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:21.038 [2024-07-15 19:44:11.779549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.422 ms 00:22:21.038 [2024-07-15 19:44:11.779565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.038 [2024-07-15 19:44:11.779707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.038 [2024-07-15 19:44:11.779723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:21.038 [2024-07-15 19:44:11.779734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:22:21.038 [2024-07-15 19:44:11.779747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-07-15 19:44:11.834237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-07-15 19:44:11.834297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:21.298 [2024-07-15 19:44:11.834312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.466 ms 00:22:21.298 [2024-07-15 19:44:11.834325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-07-15 19:44:11.834447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-07-15 19:44:11.834464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:21.298 [2024-07-15 19:44:11.834476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:21.298 [2024-07-15 19:44:11.834489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-07-15 19:44:11.834955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-07-15 19:44:11.834974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:21.298 [2024-07-15 19:44:11.834991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:22:21.298 [2024-07-15 19:44:11.835005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:21.298 [2024-07-15 19:44:11.835130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-07-15 19:44:11.835148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:21.298 [2024-07-15 19:44:11.835159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:22:21.298 [2024-07-15 19:44:11.835172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-07-15 19:44:11.859515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-07-15 19:44:11.859573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:21.298 [2024-07-15 19:44:11.859596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.318 ms 00:22:21.298 [2024-07-15 19:44:11.859610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-07-15 19:44:11.880840] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:21.298 [2024-07-15 19:44:11.880884] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:21.298 [2024-07-15 19:44:11.880915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-07-15 19:44:11.880928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:21.298 [2024-07-15 19:44:11.880940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.151 ms 00:22:21.298 [2024-07-15 19:44:11.880952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-07-15 19:44:11.912490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-07-15 19:44:11.912538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:21.298 [2024-07-15 19:44:11.912553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.458 ms 00:22:21.298 [2024-07-15 19:44:11.912565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-07-15 19:44:11.932480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-07-15 19:44:11.932526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:21.298 [2024-07-15 19:44:11.932550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.834 ms 00:22:21.298 [2024-07-15 19:44:11.932636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-07-15 19:44:11.953140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-07-15 19:44:11.953183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:21.298 [2024-07-15 19:44:11.953196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.427 ms 00:22:21.298 [2024-07-15 19:44:11.953208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-07-15 19:44:11.954124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-07-15 19:44:11.954158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:21.298 [2024-07-15 19:44:11.954171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.812 ms 00:22:21.298 [2024-07-15 19:44:11.954185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-07-15 
19:44:12.057072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-07-15 19:44:12.057149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:21.298 [2024-07-15 19:44:12.057167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.857 ms 00:22:21.298 [2024-07-15 19:44:12.057181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-07-15 19:44:12.070478] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:21.298 [2024-07-15 19:44:12.087095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-07-15 19:44:12.087154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:21.298 [2024-07-15 19:44:12.087191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.758 ms 00:22:21.298 [2024-07-15 19:44:12.087206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-07-15 19:44:12.087320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-07-15 19:44:12.087334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:21.298 [2024-07-15 19:44:12.087348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:21.298 [2024-07-15 19:44:12.087357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-07-15 19:44:12.087416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-07-15 19:44:12.087426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:21.298 [2024-07-15 19:44:12.087440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:21.298 [2024-07-15 19:44:12.087449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-07-15 19:44:12.087480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-07-15 19:44:12.087491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:21.298 [2024-07-15 19:44:12.087506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:21.298 [2024-07-15 19:44:12.087516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-07-15 19:44:12.087550] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:21.298 [2024-07-15 19:44:12.087562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-07-15 19:44:12.087578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:21.298 [2024-07-15 19:44:12.087588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:21.298 [2024-07-15 19:44:12.087600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.557 [2024-07-15 19:44:12.127016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.557 [2024-07-15 19:44:12.127199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:21.557 [2024-07-15 19:44:12.127309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.391 ms 00:22:21.557 [2024-07-15 19:44:12.127352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.557 [2024-07-15 19:44:12.127523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.557 [2024-07-15 19:44:12.127629] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:21.557 [2024-07-15 19:44:12.127667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:21.557 [2024-07-15 19:44:12.127699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.557 [2024-07-15 19:44:12.128696] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:21.557 [2024-07-15 19:44:12.133980] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 435.580 ms, result 0 00:22:21.557 [2024-07-15 19:44:12.135181] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:21.557 Some configs were skipped because the RPC state that can call them passed over. 00:22:21.557 19:44:12 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:22:21.816 [2024-07-15 19:44:12.427487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.816 [2024-07-15 19:44:12.427714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:21.816 [2024-07-15 19:44:12.427851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.589 ms 00:22:21.816 [2024-07-15 19:44:12.427943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.816 [2024-07-15 19:44:12.428027] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.137 ms, result 0 00:22:21.816 true 00:22:21.816 19:44:12 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:22:22.170 [2024-07-15 19:44:12.707284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.170 [2024-07-15 19:44:12.707475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:22.171 [2024-07-15 19:44:12.707570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.132 ms 00:22:22.171 [2024-07-15 19:44:12.707593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.171 [2024-07-15 19:44:12.707645] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.491 ms, result 0 00:22:22.171 true 00:22:22.171 19:44:12 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 81843 00:22:22.171 19:44:12 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81843 ']' 00:22:22.171 19:44:12 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81843 00:22:22.171 19:44:12 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:22:22.171 19:44:12 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:22.171 19:44:12 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81843 00:22:22.171 killing process with pid 81843 00:22:22.171 19:44:12 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:22.171 19:44:12 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:22.171 19:44:12 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81843' 00:22:22.171 19:44:12 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 81843 00:22:22.171 19:44:12 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 81843 00:22:23.556 [2024-07-15 19:44:13.937522] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.556 [2024-07-15 19:44:13.937605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:23.556 [2024-07-15 19:44:13.937641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:23.556 [2024-07-15 19:44:13.937661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.556 [2024-07-15 19:44:13.937711] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:23.556 [2024-07-15 19:44:13.941514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.556 [2024-07-15 19:44:13.941561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:23.556 [2024-07-15 19:44:13.941577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.772 ms 00:22:23.556 [2024-07-15 19:44:13.941595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.556 [2024-07-15 19:44:13.941883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.556 [2024-07-15 19:44:13.941903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:23.556 [2024-07-15 19:44:13.941917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:22:23.556 [2024-07-15 19:44:13.941932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.556 [2024-07-15 19:44:13.945294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.556 [2024-07-15 19:44:13.945336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:23.556 [2024-07-15 19:44:13.945353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.340 ms 00:22:23.556 [2024-07-15 19:44:13.945369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.556 [2024-07-15 19:44:13.951209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.556 [2024-07-15 19:44:13.951252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:23.556 [2024-07-15 19:44:13.951267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.801 ms 00:22:23.556 [2024-07-15 19:44:13.951284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.556 [2024-07-15 19:44:13.967279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.556 [2024-07-15 19:44:13.967328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:23.556 [2024-07-15 19:44:13.967344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.940 ms 00:22:23.556 [2024-07-15 19:44:13.967362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.556 [2024-07-15 19:44:13.978920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.556 [2024-07-15 19:44:13.978962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:23.556 [2024-07-15 19:44:13.978978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.482 ms 00:22:23.556 [2024-07-15 19:44:13.978991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.556 [2024-07-15 19:44:13.979132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.556 [2024-07-15 19:44:13.979149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:23.556 [2024-07-15 19:44:13.979160] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:22:23.556 [2024-07-15 19:44:13.979183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.556 [2024-07-15 19:44:13.996447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.556 [2024-07-15 19:44:13.996487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:23.556 [2024-07-15 19:44:13.996500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.244 ms 00:22:23.556 [2024-07-15 19:44:13.996512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.556 [2024-07-15 19:44:14.012901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.556 [2024-07-15 19:44:14.012939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:23.556 [2024-07-15 19:44:14.012952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.343 ms 00:22:23.556 [2024-07-15 19:44:14.012969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.556 [2024-07-15 19:44:14.028940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.556 [2024-07-15 19:44:14.028979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:23.556 [2024-07-15 19:44:14.028992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.923 ms 00:22:23.556 [2024-07-15 19:44:14.029003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.556 [2024-07-15 19:44:14.046415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.556 [2024-07-15 19:44:14.046456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:23.556 [2024-07-15 19:44:14.046469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.339 ms 00:22:23.556 [2024-07-15 19:44:14.046481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.556 [2024-07-15 19:44:14.046526] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:23.556 [2024-07-15 19:44:14.046548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:23.556 [2024-07-15 19:44:14.046561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:23.556 [2024-07-15 19:44:14.046575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:23.556 [2024-07-15 19:44:14.046587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:23.556 [2024-07-15 19:44:14.046600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:23.556 [2024-07-15 19:44:14.046611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:23.556 [2024-07-15 19:44:14.046627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:23.556 [2024-07-15 19:44:14.046638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:23.556 [2024-07-15 19:44:14.046651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:23.556 [2024-07-15 19:44:14.046662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:23.556 [2024-07-15 
19:44:14.046675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:23.556 [2024-07-15 19:44:14.046686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:23.556 [2024-07-15 19:44:14.046699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:23.556 [2024-07-15 19:44:14.046710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.046723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.046734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.046750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.046761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.046774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.046801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.046814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.046825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.046841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.046851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.046864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.046875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.046908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.046919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.046932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.046943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.046958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.046969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.046982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.046993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:22:23.557 [2024-07-15 19:44:14.047018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:23.557 [2024-07-15 19:44:14.047827] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:23.557 [2024-07-15 19:44:14.047837] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03f52e51-6fc1-4d5a-8b5f-2a9f46a0322e 00:22:23.557 [2024-07-15 19:44:14.047856] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:23.557 [2024-07-15 19:44:14.047866] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:23.557 [2024-07-15 19:44:14.047878] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:23.557 [2024-07-15 19:44:14.047889] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:23.557 [2024-07-15 19:44:14.047901] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:23.557 [2024-07-15 19:44:14.047911] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:23.557 [2024-07-15 19:44:14.047923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:23.557 [2024-07-15 19:44:14.047932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:23.557 [2024-07-15 19:44:14.047955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:23.557 [2024-07-15 19:44:14.047966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:23.557 [2024-07-15 19:44:14.047978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:23.557 [2024-07-15 19:44:14.047989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.441 ms 00:22:23.557 [2024-07-15 19:44:14.048001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.557 [2024-07-15 19:44:14.070179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.557 [2024-07-15 19:44:14.070255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:23.557 [2024-07-15 19:44:14.070276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.153 ms 00:22:23.557 [2024-07-15 19:44:14.070299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.557 [2024-07-15 19:44:14.071013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.557 [2024-07-15 19:44:14.071039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:23.557 [2024-07-15 19:44:14.071057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:22:23.557 [2024-07-15 19:44:14.071075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.557 [2024-07-15 19:44:14.141396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.557 [2024-07-15 19:44:14.141452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:23.557 [2024-07-15 19:44:14.141468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.557 [2024-07-15 19:44:14.141481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.557 [2024-07-15 19:44:14.141599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.557 [2024-07-15 19:44:14.141614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:23.557 [2024-07-15 19:44:14.141625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.557 [2024-07-15 19:44:14.141640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.557 [2024-07-15 19:44:14.141694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.557 [2024-07-15 19:44:14.141710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:23.557 [2024-07-15 19:44:14.141721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.557 [2024-07-15 19:44:14.141736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.557 [2024-07-15 19:44:14.141754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.557 [2024-07-15 19:44:14.141767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:23.557 [2024-07-15 19:44:14.141794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.557 [2024-07-15 19:44:14.141807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.557 [2024-07-15 19:44:14.272190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.557 [2024-07-15 19:44:14.272256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:23.557 [2024-07-15 19:44:14.272272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.557 [2024-07-15 19:44:14.272286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.815 [2024-07-15 
19:44:14.386793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.815 [2024-07-15 19:44:14.386852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:23.815 [2024-07-15 19:44:14.386867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.815 [2024-07-15 19:44:14.386881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.815 [2024-07-15 19:44:14.386975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.815 [2024-07-15 19:44:14.386991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:23.815 [2024-07-15 19:44:14.387001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.815 [2024-07-15 19:44:14.387017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.815 [2024-07-15 19:44:14.387046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.815 [2024-07-15 19:44:14.387060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:23.815 [2024-07-15 19:44:14.387070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.815 [2024-07-15 19:44:14.387082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.815 [2024-07-15 19:44:14.387194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.815 [2024-07-15 19:44:14.387211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:23.815 [2024-07-15 19:44:14.387222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.815 [2024-07-15 19:44:14.387234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.815 [2024-07-15 19:44:14.387269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.815 [2024-07-15 19:44:14.387284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:23.815 [2024-07-15 19:44:14.387295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.815 [2024-07-15 19:44:14.387307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.815 [2024-07-15 19:44:14.387345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.815 [2024-07-15 19:44:14.387362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:23.815 [2024-07-15 19:44:14.387372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.815 [2024-07-15 19:44:14.387387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.815 [2024-07-15 19:44:14.387430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.815 [2024-07-15 19:44:14.387444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:23.815 [2024-07-15 19:44:14.387454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.815 [2024-07-15 19:44:14.387467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.815 [2024-07-15 19:44:14.387602] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 450.078 ms, result 0 00:22:24.750 19:44:15 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:25.009 [2024-07-15 19:44:15.626469] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:22:25.009 [2024-07-15 19:44:15.626655] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81920 ] 00:22:25.268 [2024-07-15 19:44:15.803641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.527 [2024-07-15 19:44:16.143951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.784 [2024-07-15 19:44:16.553499] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:25.784 [2024-07-15 19:44:16.553566] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:26.043 [2024-07-15 19:44:16.719410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.043 [2024-07-15 19:44:16.719483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:26.043 [2024-07-15 19:44:16.719507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:26.043 [2024-07-15 19:44:16.719525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.043 [2024-07-15 19:44:16.723331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.043 [2024-07-15 19:44:16.723402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:26.043 [2024-07-15 19:44:16.723419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.773 ms 00:22:26.043 [2024-07-15 19:44:16.723431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.043 [2024-07-15 19:44:16.723559] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:26.043 [2024-07-15 19:44:16.724667] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:26.043 [2024-07-15 19:44:16.724702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.043 [2024-07-15 19:44:16.724714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:26.043 [2024-07-15 19:44:16.724725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.152 ms 00:22:26.043 [2024-07-15 19:44:16.724735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.043 [2024-07-15 19:44:16.726214] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:26.043 [2024-07-15 19:44:16.747112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.043 [2024-07-15 19:44:16.747155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:26.043 [2024-07-15 19:44:16.747177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.897 ms 00:22:26.043 [2024-07-15 19:44:16.747188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.043 [2024-07-15 19:44:16.747312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.043 [2024-07-15 19:44:16.747327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:26.043 [2024-07-15 19:44:16.747339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:26.043 [2024-07-15 
19:44:16.747350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.043 [2024-07-15 19:44:16.754275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.043 [2024-07-15 19:44:16.754308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:26.043 [2024-07-15 19:44:16.754320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.881 ms 00:22:26.043 [2024-07-15 19:44:16.754330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.043 [2024-07-15 19:44:16.754437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.043 [2024-07-15 19:44:16.754453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:26.043 [2024-07-15 19:44:16.754464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:26.043 [2024-07-15 19:44:16.754474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.043 [2024-07-15 19:44:16.754508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.043 [2024-07-15 19:44:16.754519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:26.043 [2024-07-15 19:44:16.754529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:26.043 [2024-07-15 19:44:16.754543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.043 [2024-07-15 19:44:16.754568] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:26.043 [2024-07-15 19:44:16.760035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.043 [2024-07-15 19:44:16.760067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:26.043 [2024-07-15 19:44:16.760079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.473 ms 00:22:26.043 [2024-07-15 19:44:16.760089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.043 [2024-07-15 19:44:16.760159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.043 [2024-07-15 19:44:16.760172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:26.043 [2024-07-15 19:44:16.760183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:26.043 [2024-07-15 19:44:16.760193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.043 [2024-07-15 19:44:16.760213] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:26.043 [2024-07-15 19:44:16.760236] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:26.043 [2024-07-15 19:44:16.760275] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:26.043 [2024-07-15 19:44:16.760293] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:26.044 [2024-07-15 19:44:16.760379] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:26.044 [2024-07-15 19:44:16.760392] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:26.044 [2024-07-15 19:44:16.760406] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 
00:22:26.044 [2024-07-15 19:44:16.760419] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:26.044 [2024-07-15 19:44:16.760432] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:26.044 [2024-07-15 19:44:16.760444] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:26.044 [2024-07-15 19:44:16.760457] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:26.044 [2024-07-15 19:44:16.760467] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:26.044 [2024-07-15 19:44:16.760478] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:26.044 [2024-07-15 19:44:16.760489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.044 [2024-07-15 19:44:16.760500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:26.044 [2024-07-15 19:44:16.760510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:22:26.044 [2024-07-15 19:44:16.760520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.044 [2024-07-15 19:44:16.760594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.044 [2024-07-15 19:44:16.760605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:26.044 [2024-07-15 19:44:16.760616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:26.044 [2024-07-15 19:44:16.760629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.044 [2024-07-15 19:44:16.760713] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:26.044 [2024-07-15 19:44:16.760726] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:26.044 [2024-07-15 19:44:16.760738] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:26.044 [2024-07-15 19:44:16.760748] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:26.044 [2024-07-15 19:44:16.760759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:26.044 [2024-07-15 19:44:16.760769] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:26.044 [2024-07-15 19:44:16.760797] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:26.044 [2024-07-15 19:44:16.760808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:26.044 [2024-07-15 19:44:16.760818] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:26.044 [2024-07-15 19:44:16.760828] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:26.044 [2024-07-15 19:44:16.760838] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:26.044 [2024-07-15 19:44:16.760848] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:26.044 [2024-07-15 19:44:16.760873] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:26.044 [2024-07-15 19:44:16.760882] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:26.044 [2024-07-15 19:44:16.760894] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:26.044 [2024-07-15 19:44:16.760904] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:26.044 [2024-07-15 19:44:16.760914] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:22:26.044 [2024-07-15 19:44:16.760923] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:26.044 [2024-07-15 19:44:16.760944] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:26.044 [2024-07-15 19:44:16.760954] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:26.044 [2024-07-15 19:44:16.760964] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:26.044 [2024-07-15 19:44:16.760973] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:26.044 [2024-07-15 19:44:16.760983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:26.044 [2024-07-15 19:44:16.760992] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:26.044 [2024-07-15 19:44:16.761006] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:26.044 [2024-07-15 19:44:16.761015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:26.044 [2024-07-15 19:44:16.761024] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:26.044 [2024-07-15 19:44:16.761034] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:26.044 [2024-07-15 19:44:16.761043] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:26.044 [2024-07-15 19:44:16.761052] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:26.044 [2024-07-15 19:44:16.761061] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:26.044 [2024-07-15 19:44:16.761070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:26.044 [2024-07-15 19:44:16.761079] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:26.044 [2024-07-15 19:44:16.761089] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:26.044 [2024-07-15 19:44:16.761098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:26.044 [2024-07-15 19:44:16.761107] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:26.044 [2024-07-15 19:44:16.761116] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:26.044 [2024-07-15 19:44:16.761125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:26.044 [2024-07-15 19:44:16.761134] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:26.044 [2024-07-15 19:44:16.761143] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:26.044 [2024-07-15 19:44:16.761152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:26.044 [2024-07-15 19:44:16.761161] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:26.044 [2024-07-15 19:44:16.761170] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:26.044 [2024-07-15 19:44:16.761179] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:26.044 [2024-07-15 19:44:16.761189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:26.044 [2024-07-15 19:44:16.761199] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:26.044 [2024-07-15 19:44:16.761208] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:26.044 [2024-07-15 19:44:16.761218] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:26.044 [2024-07-15 19:44:16.761228] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:26.044 [2024-07-15 19:44:16.761237] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:26.044 [2024-07-15 19:44:16.761246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:26.044 [2024-07-15 19:44:16.761256] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:26.044 [2024-07-15 19:44:16.761265] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:26.044 [2024-07-15 19:44:16.761275] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:26.044 [2024-07-15 19:44:16.761291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:26.044 [2024-07-15 19:44:16.761303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:26.044 [2024-07-15 19:44:16.761313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:26.044 [2024-07-15 19:44:16.761325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:26.044 [2024-07-15 19:44:16.761335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:26.044 [2024-07-15 19:44:16.761345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:26.044 [2024-07-15 19:44:16.761355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:26.044 [2024-07-15 19:44:16.761365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:26.044 [2024-07-15 19:44:16.761376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:26.044 [2024-07-15 19:44:16.761386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:26.044 [2024-07-15 19:44:16.761397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:26.044 [2024-07-15 19:44:16.761408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:26.044 [2024-07-15 19:44:16.761418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:26.044 [2024-07-15 19:44:16.761428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:26.044 [2024-07-15 19:44:16.761438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:26.044 [2024-07-15 19:44:16.761448] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:26.044 [2024-07-15 19:44:16.761459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:26.044 [2024-07-15 19:44:16.761470] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:26.044 [2024-07-15 19:44:16.761480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:26.044 [2024-07-15 19:44:16.761490] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:26.044 [2024-07-15 19:44:16.761501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:26.044 [2024-07-15 19:44:16.761512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.044 [2024-07-15 19:44:16.761521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:26.044 [2024-07-15 19:44:16.761532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.850 ms 00:22:26.044 [2024-07-15 19:44:16.761543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.044 [2024-07-15 19:44:16.821035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.044 [2024-07-15 19:44:16.821092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:26.044 [2024-07-15 19:44:16.821109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.433 ms 00:22:26.044 [2024-07-15 19:44:16.821121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.044 [2024-07-15 19:44:16.821296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.044 [2024-07-15 19:44:16.821311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:26.044 [2024-07-15 19:44:16.821324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:26.044 [2024-07-15 19:44:16.821340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.303 [2024-07-15 19:44:16.875651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.303 [2024-07-15 19:44:16.875706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:26.303 [2024-07-15 19:44:16.875722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.283 ms 00:22:26.303 [2024-07-15 19:44:16.875732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.303 [2024-07-15 19:44:16.875859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.303 [2024-07-15 19:44:16.875875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:26.303 [2024-07-15 19:44:16.875887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:26.303 [2024-07-15 19:44:16.875897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.303 [2024-07-15 19:44:16.876341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.303 [2024-07-15 19:44:16.876374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:26.303 [2024-07-15 19:44:16.876386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:22:26.303 [2024-07-15 19:44:16.876397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.303 [2024-07-15 19:44:16.876521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:26.303 [2024-07-15 19:44:16.876554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:26.303 [2024-07-15 19:44:16.876566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:22:26.303 [2024-07-15 19:44:16.876576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.303 [2024-07-15 19:44:16.896900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.303 [2024-07-15 19:44:16.896952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:26.303 [2024-07-15 19:44:16.896968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.297 ms 00:22:26.303 [2024-07-15 19:44:16.896979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.303 [2024-07-15 19:44:16.917479] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:26.303 [2024-07-15 19:44:16.917533] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:26.303 [2024-07-15 19:44:16.917558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.303 [2024-07-15 19:44:16.917577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:26.303 [2024-07-15 19:44:16.917598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.420 ms 00:22:26.303 [2024-07-15 19:44:16.917616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.303 [2024-07-15 19:44:16.951688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.303 [2024-07-15 19:44:16.951743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:26.303 [2024-07-15 19:44:16.951758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.942 ms 00:22:26.303 [2024-07-15 19:44:16.951769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.303 [2024-07-15 19:44:16.972406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.303 [2024-07-15 19:44:16.972462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:26.303 [2024-07-15 19:44:16.972479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.524 ms 00:22:26.303 [2024-07-15 19:44:16.972491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.303 [2024-07-15 19:44:16.993141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.303 [2024-07-15 19:44:16.993188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:26.303 [2024-07-15 19:44:16.993203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.547 ms 00:22:26.303 [2024-07-15 19:44:16.993213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.303 [2024-07-15 19:44:16.994122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.303 [2024-07-15 19:44:16.994154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:26.303 [2024-07-15 19:44:16.994168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.792 ms 00:22:26.303 [2024-07-15 19:44:16.994178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.303 [2024-07-15 19:44:17.089707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.303 [2024-07-15 
19:44:17.089789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:26.303 [2024-07-15 19:44:17.089807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.494 ms 00:22:26.303 [2024-07-15 19:44:17.089818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.561 [2024-07-15 19:44:17.103154] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:26.561 [2024-07-15 19:44:17.121109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.561 [2024-07-15 19:44:17.121177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:26.561 [2024-07-15 19:44:17.121199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.118 ms 00:22:26.561 [2024-07-15 19:44:17.121215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.561 [2024-07-15 19:44:17.121358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.561 [2024-07-15 19:44:17.121379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:26.561 [2024-07-15 19:44:17.121400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:26.561 [2024-07-15 19:44:17.121416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.561 [2024-07-15 19:44:17.121487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.561 [2024-07-15 19:44:17.121505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:26.561 [2024-07-15 19:44:17.121521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:26.561 [2024-07-15 19:44:17.121537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.561 [2024-07-15 19:44:17.121572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.561 [2024-07-15 19:44:17.121589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:26.561 [2024-07-15 19:44:17.121608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:26.561 [2024-07-15 19:44:17.121629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.561 [2024-07-15 19:44:17.121672] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:26.561 [2024-07-15 19:44:17.121690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.561 [2024-07-15 19:44:17.121706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:26.561 [2024-07-15 19:44:17.121722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:26.561 [2024-07-15 19:44:17.121738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.561 [2024-07-15 19:44:17.161463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.561 [2024-07-15 19:44:17.161513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:26.561 [2024-07-15 19:44:17.161549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.693 ms 00:22:26.561 [2024-07-15 19:44:17.161560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.561 [2024-07-15 19:44:17.161672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.562 [2024-07-15 19:44:17.161686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:26.562 [2024-07-15 
19:44:17.161697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:26.562 [2024-07-15 19:44:17.161708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.562 [2024-07-15 19:44:17.162607] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:26.562 [2024-07-15 19:44:17.167674] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 442.921 ms, result 0 00:22:26.562 [2024-07-15 19:44:17.168455] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:26.562 [2024-07-15 19:44:17.188239] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:35.725  Copying: 31/256 [MB] (31 MBps) Copying: 61/256 [MB] (30 MBps) Copying: 90/256 [MB] (28 MBps) Copying: 121/256 [MB] (31 MBps) Copying: 150/256 [MB] (28 MBps) Copying: 179/256 [MB] (29 MBps) Copying: 208/256 [MB] (28 MBps) Copying: 236/256 [MB] (28 MBps) Copying: 256/256 [MB] (average 29 MBps)[2024-07-15 19:44:26.275803] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:35.725 [2024-07-15 19:44:26.296043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.725 [2024-07-15 19:44:26.296129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:35.725 [2024-07-15 19:44:26.296148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:35.725 [2024-07-15 19:44:26.296160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.725 [2024-07-15 19:44:26.296190] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:35.725 [2024-07-15 19:44:26.300661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.725 [2024-07-15 19:44:26.300706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:35.725 [2024-07-15 19:44:26.300720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.451 ms 00:22:35.725 [2024-07-15 19:44:26.300730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.725 [2024-07-15 19:44:26.300992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.725 [2024-07-15 19:44:26.301005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:35.725 [2024-07-15 19:44:26.301016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:22:35.725 [2024-07-15 19:44:26.301027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.725 [2024-07-15 19:44:26.304322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.725 [2024-07-15 19:44:26.304349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:35.725 [2024-07-15 19:44:26.304361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.279 ms 00:22:35.725 [2024-07-15 19:44:26.304393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.725 [2024-07-15 19:44:26.311046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.725 [2024-07-15 19:44:26.311083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:35.725 [2024-07-15 19:44:26.311097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.628 ms 
00:22:35.725 [2024-07-15 19:44:26.311107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.725 [2024-07-15 19:44:26.354566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.725 [2024-07-15 19:44:26.354642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:35.725 [2024-07-15 19:44:26.354659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.374 ms 00:22:35.725 [2024-07-15 19:44:26.354686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.725 [2024-07-15 19:44:26.378638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.725 [2024-07-15 19:44:26.378718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:35.725 [2024-07-15 19:44:26.378736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.812 ms 00:22:35.725 [2024-07-15 19:44:26.378748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.725 [2024-07-15 19:44:26.378974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.725 [2024-07-15 19:44:26.378990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:35.725 [2024-07-15 19:44:26.379002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:22:35.725 [2024-07-15 19:44:26.379014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.725 [2024-07-15 19:44:26.421629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.725 [2024-07-15 19:44:26.421702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:35.725 [2024-07-15 19:44:26.421719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.592 ms 00:22:35.725 [2024-07-15 19:44:26.421730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.725 [2024-07-15 19:44:26.464766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.725 [2024-07-15 19:44:26.464853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:35.725 [2024-07-15 19:44:26.464869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.922 ms 00:22:35.725 [2024-07-15 19:44:26.464880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.725 [2024-07-15 19:44:26.507325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.725 [2024-07-15 19:44:26.507396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:35.725 [2024-07-15 19:44:26.507412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.348 ms 00:22:35.725 [2024-07-15 19:44:26.507422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.984 [2024-07-15 19:44:26.547406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.984 [2024-07-15 19:44:26.547453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:35.984 [2024-07-15 19:44:26.547468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.884 ms 00:22:35.984 [2024-07-15 19:44:26.547478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.984 [2024-07-15 19:44:26.547539] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:35.984 [2024-07-15 19:44:26.547559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 
00:22:35.984 [2024-07-15 19:44:26.547580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:35.984 [2024-07-15 19:44:26.547592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:35.984 [2024-07-15 19:44:26.547603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:35.984 [2024-07-15 19:44:26.547615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:35.984 [2024-07-15 19:44:26.547626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:35.984 [2024-07-15 19:44:26.547636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:35.984 [2024-07-15 19:44:26.547647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:35.984 [2024-07-15 19:44:26.547657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:35.984 [2024-07-15 19:44:26.547668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:35.984 [2024-07-15 19:44:26.547680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:35.984 [2024-07-15 19:44:26.547690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:35.984 [2024-07-15 19:44:26.547701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:35.984 [2024-07-15 19:44:26.547711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:35.984 [2024-07-15 19:44:26.547722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:35.984 [2024-07-15 19:44:26.547732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:35.984 [2024-07-15 19:44:26.547743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.547753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.547764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.547774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.547799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.547810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.547821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.547832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.547843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.547854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 
state: free 00:22:35.985 [2024-07-15 19:44:26.547866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.547877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.547888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.547899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.547910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.547921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.547932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.547943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.547954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.547965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.547975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.547986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.547997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 
0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:35.985 [2024-07-15 19:44:26.548660] ftl_debug.c: 
211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:35.985 [2024-07-15 19:44:26.548670] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03f52e51-6fc1-4d5a-8b5f-2a9f46a0322e 00:22:35.985 [2024-07-15 19:44:26.548681] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:35.985 [2024-07-15 19:44:26.548691] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:35.985 [2024-07-15 19:44:26.548712] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:35.985 [2024-07-15 19:44:26.548723] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:35.985 [2024-07-15 19:44:26.548733] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:35.985 [2024-07-15 19:44:26.548743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:35.985 [2024-07-15 19:44:26.548752] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:35.985 [2024-07-15 19:44:26.548761] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:35.985 [2024-07-15 19:44:26.548770] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:35.985 [2024-07-15 19:44:26.548791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.985 [2024-07-15 19:44:26.548802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:35.985 [2024-07-15 19:44:26.548813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.253 ms 00:22:35.985 [2024-07-15 19:44:26.548827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.985 [2024-07-15 19:44:26.568930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.986 [2024-07-15 19:44:26.568969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:35.986 [2024-07-15 19:44:26.568982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.081 ms 00:22:35.986 [2024-07-15 19:44:26.569008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.986 [2024-07-15 19:44:26.569571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.986 [2024-07-15 19:44:26.569591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:35.986 [2024-07-15 19:44:26.569608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:22:35.986 [2024-07-15 19:44:26.569618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.986 [2024-07-15 19:44:26.619695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.986 [2024-07-15 19:44:26.619736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:35.986 [2024-07-15 19:44:26.619750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.986 [2024-07-15 19:44:26.619776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.986 [2024-07-15 19:44:26.619861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.986 [2024-07-15 19:44:26.619874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:35.986 [2024-07-15 19:44:26.619891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.986 [2024-07-15 19:44:26.619901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.986 [2024-07-15 19:44:26.619973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:22:35.986 [2024-07-15 19:44:26.619986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:35.986 [2024-07-15 19:44:26.619996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.986 [2024-07-15 19:44:26.620006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.986 [2024-07-15 19:44:26.620025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.986 [2024-07-15 19:44:26.620035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:35.986 [2024-07-15 19:44:26.620045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.986 [2024-07-15 19:44:26.620060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.986 [2024-07-15 19:44:26.745785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.986 [2024-07-15 19:44:26.745849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:35.986 [2024-07-15 19:44:26.745865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.986 [2024-07-15 19:44:26.745892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.244 [2024-07-15 19:44:26.851881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.244 [2024-07-15 19:44:26.851950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:36.244 [2024-07-15 19:44:26.851966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.244 [2024-07-15 19:44:26.851985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.244 [2024-07-15 19:44:26.852055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.244 [2024-07-15 19:44:26.852067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:36.244 [2024-07-15 19:44:26.852078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.244 [2024-07-15 19:44:26.852089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.244 [2024-07-15 19:44:26.852119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.244 [2024-07-15 19:44:26.852129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:36.244 [2024-07-15 19:44:26.852139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.244 [2024-07-15 19:44:26.852149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.244 [2024-07-15 19:44:26.852270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.244 [2024-07-15 19:44:26.852283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:36.244 [2024-07-15 19:44:26.852294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.244 [2024-07-15 19:44:26.852304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.244 [2024-07-15 19:44:26.852340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.244 [2024-07-15 19:44:26.852352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:36.244 [2024-07-15 19:44:26.852362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.244 [2024-07-15 19:44:26.852372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.244 
[2024-07-15 19:44:26.852416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.244 [2024-07-15 19:44:26.852428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:36.244 [2024-07-15 19:44:26.852438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.244 [2024-07-15 19:44:26.852448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.244 [2024-07-15 19:44:26.852494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.244 [2024-07-15 19:44:26.852505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:36.244 [2024-07-15 19:44:26.852515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.244 [2024-07-15 19:44:26.852525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.244 [2024-07-15 19:44:26.852666] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 556.633 ms, result 0 00:22:37.618 00:22:37.618 00:22:37.618 19:44:28 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:37.876 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:22:37.876 19:44:28 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:22:37.876 19:44:28 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:22:37.876 19:44:28 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:37.876 19:44:28 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:37.876 19:44:28 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:22:38.135 19:44:28 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:38.135 19:44:28 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 81843 00:22:38.135 Process with pid 81843 is not found 00:22:38.135 19:44:28 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81843 ']' 00:22:38.135 19:44:28 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81843 00:22:38.135 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (81843) - No such process 00:22:38.135 19:44:28 ftl.ftl_trim -- common/autotest_common.sh@975 -- # echo 'Process with pid 81843 is not found' 00:22:38.135 00:22:38.135 real 1m10.485s 00:22:38.135 user 1m37.333s 00:22:38.135 sys 0m6.875s 00:22:38.135 19:44:28 ftl.ftl_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:38.135 19:44:28 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:38.135 ************************************ 00:22:38.135 END TEST ftl_trim 00:22:38.135 ************************************ 00:22:38.135 19:44:28 ftl -- common/autotest_common.sh@1142 -- # return 0 00:22:38.135 19:44:28 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:38.135 19:44:28 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:38.135 19:44:28 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:38.135 19:44:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:38.135 ************************************ 00:22:38.135 START TEST ftl_restore 00:22:38.135 ************************************ 00:22:38.135 19:44:28 ftl.ftl_restore -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 
00:22:38.135 * Looking for test storage... 00:22:38.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:38.135 19:44:28 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:38.135 19:44:28 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.OdnENnmIz7 
00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=82109 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 82109 00:22:38.394 19:44:28 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:38.394 19:44:28 ftl.ftl_restore -- common/autotest_common.sh@829 -- # '[' -z 82109 ']' 00:22:38.394 19:44:28 ftl.ftl_restore -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.394 19:44:28 ftl.ftl_restore -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.394 19:44:28 ftl.ftl_restore -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.394 19:44:28 ftl.ftl_restore -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.394 19:44:28 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:38.394 [2024-07-15 19:44:29.084447] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:22:38.394 [2024-07-15 19:44:29.084626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82109 ] 00:22:38.653 [2024-07-15 19:44:29.270498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.911 [2024-07-15 19:44:29.511394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.844 19:44:30 ftl.ftl_restore -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:39.845 19:44:30 ftl.ftl_restore -- common/autotest_common.sh@862 -- # return 0 00:22:39.845 19:44:30 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:39.845 19:44:30 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:22:39.845 19:44:30 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:39.845 19:44:30 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:22:39.845 19:44:30 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:22:39.845 19:44:30 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:40.103 19:44:30 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:40.103 19:44:30 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:22:40.103 19:44:30 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:40.103 19:44:30 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:40.103 19:44:30 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:40.103 19:44:30 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:22:40.103 19:44:30 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:40.103 19:44:30 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:40.431 19:44:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:40.431 { 00:22:40.431 "name": "nvme0n1", 00:22:40.431 "aliases": [ 00:22:40.431 "0e5bae26-5b71-426e-8a12-f391f6d5ea8c" 00:22:40.431 ], 00:22:40.431 "product_name": "NVMe disk", 00:22:40.431 "block_size": 4096, 00:22:40.431 "num_blocks": 1310720, 00:22:40.431 "uuid": "0e5bae26-5b71-426e-8a12-f391f6d5ea8c", 00:22:40.431 "assigned_rate_limits": { 00:22:40.431 "rw_ios_per_sec": 0, 00:22:40.431 "rw_mbytes_per_sec": 0, 00:22:40.431 "r_mbytes_per_sec": 0, 00:22:40.431 "w_mbytes_per_sec": 0 00:22:40.431 }, 00:22:40.431 "claimed": true, 00:22:40.431 "claim_type": "read_many_write_one", 00:22:40.431 "zoned": false, 00:22:40.431 "supported_io_types": { 00:22:40.431 "read": true, 00:22:40.431 "write": true, 00:22:40.431 "unmap": true, 00:22:40.431 "flush": true, 00:22:40.431 "reset": true, 00:22:40.431 "nvme_admin": true, 00:22:40.431 "nvme_io": true, 00:22:40.431 "nvme_io_md": false, 00:22:40.431 "write_zeroes": true, 00:22:40.431 "zcopy": false, 00:22:40.431 "get_zone_info": false, 00:22:40.431 "zone_management": false, 00:22:40.431 "zone_append": false, 00:22:40.431 "compare": true, 00:22:40.431 "compare_and_write": false, 00:22:40.431 "abort": true, 00:22:40.431 "seek_hole": false, 00:22:40.431 "seek_data": false, 00:22:40.431 "copy": true, 00:22:40.431 "nvme_iov_md": false 00:22:40.431 }, 00:22:40.431 "driver_specific": { 00:22:40.431 "nvme": [ 00:22:40.431 { 00:22:40.431 
"pci_address": "0000:00:11.0", 00:22:40.431 "trid": { 00:22:40.431 "trtype": "PCIe", 00:22:40.431 "traddr": "0000:00:11.0" 00:22:40.431 }, 00:22:40.431 "ctrlr_data": { 00:22:40.431 "cntlid": 0, 00:22:40.431 "vendor_id": "0x1b36", 00:22:40.431 "model_number": "QEMU NVMe Ctrl", 00:22:40.431 "serial_number": "12341", 00:22:40.431 "firmware_revision": "8.0.0", 00:22:40.431 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:40.431 "oacs": { 00:22:40.431 "security": 0, 00:22:40.431 "format": 1, 00:22:40.431 "firmware": 0, 00:22:40.431 "ns_manage": 1 00:22:40.431 }, 00:22:40.431 "multi_ctrlr": false, 00:22:40.431 "ana_reporting": false 00:22:40.431 }, 00:22:40.431 "vs": { 00:22:40.431 "nvme_version": "1.4" 00:22:40.431 }, 00:22:40.431 "ns_data": { 00:22:40.431 "id": 1, 00:22:40.431 "can_share": false 00:22:40.431 } 00:22:40.431 } 00:22:40.431 ], 00:22:40.431 "mp_policy": "active_passive" 00:22:40.431 } 00:22:40.431 } 00:22:40.431 ]' 00:22:40.431 19:44:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:40.431 19:44:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:40.431 19:44:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:40.431 19:44:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:40.431 19:44:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:40.431 19:44:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:22:40.431 19:44:31 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:22:40.431 19:44:31 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:40.431 19:44:31 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:22:40.431 19:44:31 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:40.431 19:44:31 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:40.689 19:44:31 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=e315323a-c5a8-4b52-ba26-cdacd06e1923 00:22:40.689 19:44:31 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:22:40.689 19:44:31 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e315323a-c5a8-4b52-ba26-cdacd06e1923 00:22:40.689 19:44:31 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:41.252 19:44:31 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=ea6fff8a-dccc-41e4-acf1-1f9378016a84 00:22:41.252 19:44:31 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ea6fff8a-dccc-41e4-acf1-1f9378016a84 00:22:41.252 19:44:31 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=9c16d2df-ebf1-42cc-a2b8-21b45c2eb584 00:22:41.252 19:44:31 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:22:41.252 19:44:31 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9c16d2df-ebf1-42cc-a2b8-21b45c2eb584 00:22:41.252 19:44:31 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:22:41.252 19:44:31 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:41.252 19:44:31 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=9c16d2df-ebf1-42cc-a2b8-21b45c2eb584 00:22:41.252 19:44:31 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:22:41.252 19:44:31 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 9c16d2df-ebf1-42cc-a2b8-21b45c2eb584 
00:22:41.252 19:44:31 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=9c16d2df-ebf1-42cc-a2b8-21b45c2eb584 00:22:41.252 19:44:31 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:41.252 19:44:31 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:22:41.252 19:44:31 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:41.252 19:44:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9c16d2df-ebf1-42cc-a2b8-21b45c2eb584 00:22:41.510 19:44:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:41.510 { 00:22:41.510 "name": "9c16d2df-ebf1-42cc-a2b8-21b45c2eb584", 00:22:41.510 "aliases": [ 00:22:41.510 "lvs/nvme0n1p0" 00:22:41.510 ], 00:22:41.510 "product_name": "Logical Volume", 00:22:41.510 "block_size": 4096, 00:22:41.510 "num_blocks": 26476544, 00:22:41.510 "uuid": "9c16d2df-ebf1-42cc-a2b8-21b45c2eb584", 00:22:41.510 "assigned_rate_limits": { 00:22:41.510 "rw_ios_per_sec": 0, 00:22:41.510 "rw_mbytes_per_sec": 0, 00:22:41.510 "r_mbytes_per_sec": 0, 00:22:41.510 "w_mbytes_per_sec": 0 00:22:41.510 }, 00:22:41.510 "claimed": false, 00:22:41.510 "zoned": false, 00:22:41.510 "supported_io_types": { 00:22:41.510 "read": true, 00:22:41.510 "write": true, 00:22:41.510 "unmap": true, 00:22:41.510 "flush": false, 00:22:41.510 "reset": true, 00:22:41.510 "nvme_admin": false, 00:22:41.510 "nvme_io": false, 00:22:41.510 "nvme_io_md": false, 00:22:41.510 "write_zeroes": true, 00:22:41.510 "zcopy": false, 00:22:41.510 "get_zone_info": false, 00:22:41.510 "zone_management": false, 00:22:41.510 "zone_append": false, 00:22:41.510 "compare": false, 00:22:41.510 "compare_and_write": false, 00:22:41.510 "abort": false, 00:22:41.510 "seek_hole": true, 00:22:41.510 "seek_data": true, 00:22:41.510 "copy": false, 00:22:41.510 "nvme_iov_md": false 00:22:41.510 }, 00:22:41.510 "driver_specific": { 00:22:41.510 "lvol": { 00:22:41.510 "lvol_store_uuid": "ea6fff8a-dccc-41e4-acf1-1f9378016a84", 00:22:41.510 "base_bdev": "nvme0n1", 00:22:41.510 "thin_provision": true, 00:22:41.510 "num_allocated_clusters": 0, 00:22:41.510 "snapshot": false, 00:22:41.510 "clone": false, 00:22:41.510 "esnap_clone": false 00:22:41.510 } 00:22:41.510 } 00:22:41.510 } 00:22:41.510 ]' 00:22:41.510 19:44:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:41.510 19:44:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:41.510 19:44:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:41.510 19:44:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:41.510 19:44:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:41.510 19:44:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:22:41.510 19:44:32 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:22:41.510 19:44:32 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:22:41.768 19:44:32 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:42.026 19:44:32 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:42.026 19:44:32 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:42.026 19:44:32 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 9c16d2df-ebf1-42cc-a2b8-21b45c2eb584 00:22:42.026 19:44:32 ftl.ftl_restore -- 
common/autotest_common.sh@1378 -- # local bdev_name=9c16d2df-ebf1-42cc-a2b8-21b45c2eb584 00:22:42.026 19:44:32 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:42.026 19:44:32 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:22:42.026 19:44:32 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:42.027 19:44:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9c16d2df-ebf1-42cc-a2b8-21b45c2eb584 00:22:42.284 19:44:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:42.284 { 00:22:42.284 "name": "9c16d2df-ebf1-42cc-a2b8-21b45c2eb584", 00:22:42.284 "aliases": [ 00:22:42.284 "lvs/nvme0n1p0" 00:22:42.284 ], 00:22:42.284 "product_name": "Logical Volume", 00:22:42.284 "block_size": 4096, 00:22:42.284 "num_blocks": 26476544, 00:22:42.284 "uuid": "9c16d2df-ebf1-42cc-a2b8-21b45c2eb584", 00:22:42.284 "assigned_rate_limits": { 00:22:42.284 "rw_ios_per_sec": 0, 00:22:42.284 "rw_mbytes_per_sec": 0, 00:22:42.284 "r_mbytes_per_sec": 0, 00:22:42.284 "w_mbytes_per_sec": 0 00:22:42.284 }, 00:22:42.284 "claimed": false, 00:22:42.284 "zoned": false, 00:22:42.284 "supported_io_types": { 00:22:42.284 "read": true, 00:22:42.284 "write": true, 00:22:42.284 "unmap": true, 00:22:42.284 "flush": false, 00:22:42.284 "reset": true, 00:22:42.284 "nvme_admin": false, 00:22:42.284 "nvme_io": false, 00:22:42.284 "nvme_io_md": false, 00:22:42.284 "write_zeroes": true, 00:22:42.284 "zcopy": false, 00:22:42.284 "get_zone_info": false, 00:22:42.284 "zone_management": false, 00:22:42.284 "zone_append": false, 00:22:42.284 "compare": false, 00:22:42.284 "compare_and_write": false, 00:22:42.284 "abort": false, 00:22:42.284 "seek_hole": true, 00:22:42.284 "seek_data": true, 00:22:42.284 "copy": false, 00:22:42.284 "nvme_iov_md": false 00:22:42.284 }, 00:22:42.284 "driver_specific": { 00:22:42.284 "lvol": { 00:22:42.284 "lvol_store_uuid": "ea6fff8a-dccc-41e4-acf1-1f9378016a84", 00:22:42.284 "base_bdev": "nvme0n1", 00:22:42.284 "thin_provision": true, 00:22:42.284 "num_allocated_clusters": 0, 00:22:42.284 "snapshot": false, 00:22:42.284 "clone": false, 00:22:42.284 "esnap_clone": false 00:22:42.284 } 00:22:42.284 } 00:22:42.284 } 00:22:42.284 ]' 00:22:42.284 19:44:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:42.284 19:44:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:42.284 19:44:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:42.284 19:44:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:42.284 19:44:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:42.284 19:44:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:22:42.284 19:44:32 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:22:42.284 19:44:32 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:42.544 19:44:33 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:22:42.544 19:44:33 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 9c16d2df-ebf1-42cc-a2b8-21b45c2eb584 00:22:42.544 19:44:33 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=9c16d2df-ebf1-42cc-a2b8-21b45c2eb584 00:22:42.544 19:44:33 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:42.544 19:44:33 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:22:42.544 19:44:33 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:42.544 19:44:33 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9c16d2df-ebf1-42cc-a2b8-21b45c2eb584 00:22:42.802 19:44:33 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:42.802 { 00:22:42.802 "name": "9c16d2df-ebf1-42cc-a2b8-21b45c2eb584", 00:22:42.802 "aliases": [ 00:22:42.802 "lvs/nvme0n1p0" 00:22:42.802 ], 00:22:42.802 "product_name": "Logical Volume", 00:22:42.802 "block_size": 4096, 00:22:42.802 "num_blocks": 26476544, 00:22:42.802 "uuid": "9c16d2df-ebf1-42cc-a2b8-21b45c2eb584", 00:22:42.802 "assigned_rate_limits": { 00:22:42.802 "rw_ios_per_sec": 0, 00:22:42.802 "rw_mbytes_per_sec": 0, 00:22:42.802 "r_mbytes_per_sec": 0, 00:22:42.802 "w_mbytes_per_sec": 0 00:22:42.802 }, 00:22:42.802 "claimed": false, 00:22:42.802 "zoned": false, 00:22:42.802 "supported_io_types": { 00:22:42.802 "read": true, 00:22:42.802 "write": true, 00:22:42.802 "unmap": true, 00:22:42.802 "flush": false, 00:22:42.802 "reset": true, 00:22:42.802 "nvme_admin": false, 00:22:42.802 "nvme_io": false, 00:22:42.802 "nvme_io_md": false, 00:22:42.802 "write_zeroes": true, 00:22:42.802 "zcopy": false, 00:22:42.802 "get_zone_info": false, 00:22:42.802 "zone_management": false, 00:22:42.802 "zone_append": false, 00:22:42.802 "compare": false, 00:22:42.802 "compare_and_write": false, 00:22:42.802 "abort": false, 00:22:42.802 "seek_hole": true, 00:22:42.802 "seek_data": true, 00:22:42.802 "copy": false, 00:22:42.802 "nvme_iov_md": false 00:22:42.802 }, 00:22:42.802 "driver_specific": { 00:22:42.802 "lvol": { 00:22:42.802 "lvol_store_uuid": "ea6fff8a-dccc-41e4-acf1-1f9378016a84", 00:22:42.802 "base_bdev": "nvme0n1", 00:22:42.802 "thin_provision": true, 00:22:42.802 "num_allocated_clusters": 0, 00:22:42.802 "snapshot": false, 00:22:42.802 "clone": false, 00:22:42.802 "esnap_clone": false 00:22:42.802 } 00:22:42.802 } 00:22:42.802 } 00:22:42.802 ]' 00:22:42.802 19:44:33 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:42.802 19:44:33 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:42.802 19:44:33 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:42.802 19:44:33 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:42.802 19:44:33 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:42.802 19:44:33 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:22:42.802 19:44:33 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:22:42.802 19:44:33 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 9c16d2df-ebf1-42cc-a2b8-21b45c2eb584 --l2p_dram_limit 10' 00:22:42.802 19:44:33 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:22:42.802 19:44:33 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:42.802 19:44:33 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:42.803 19:44:33 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:22:42.803 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:22:42.803 19:44:33 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9c16d2df-ebf1-42cc-a2b8-21b45c2eb584 --l2p_dram_limit 10 -c nvc0n1p0 00:22:43.061 
[2024-07-15 19:44:33.650505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.061 [2024-07-15 19:44:33.650575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:43.061 [2024-07-15 19:44:33.650593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:43.061 [2024-07-15 19:44:33.650608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.061 [2024-07-15 19:44:33.650676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.061 [2024-07-15 19:44:33.650691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:43.061 [2024-07-15 19:44:33.650704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:22:43.061 [2024-07-15 19:44:33.650718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.061 [2024-07-15 19:44:33.650741] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:43.061 [2024-07-15 19:44:33.651920] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:43.061 [2024-07-15 19:44:33.651949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.061 [2024-07-15 19:44:33.651967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:43.061 [2024-07-15 19:44:33.651979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.214 ms 00:22:43.061 [2024-07-15 19:44:33.651993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.061 [2024-07-15 19:44:33.652073] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f3128da4-5021-4384-924c-f29450b8d9c2 00:22:43.061 [2024-07-15 19:44:33.653531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.061 [2024-07-15 19:44:33.653568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:43.061 [2024-07-15 19:44:33.653584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:43.061 [2024-07-15 19:44:33.653596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.061 [2024-07-15 19:44:33.661377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.061 [2024-07-15 19:44:33.661410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:43.061 [2024-07-15 19:44:33.661430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.718 ms 00:22:43.061 [2024-07-15 19:44:33.661442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.061 [2024-07-15 19:44:33.661556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.061 [2024-07-15 19:44:33.661572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:43.061 [2024-07-15 19:44:33.661586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:22:43.061 [2024-07-15 19:44:33.661598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.061 [2024-07-15 19:44:33.661673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.061 [2024-07-15 19:44:33.661686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:43.061 [2024-07-15 19:44:33.661701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:43.061 [2024-07-15 19:44:33.661715] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.061 [2024-07-15 19:44:33.661745] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:43.061 [2024-07-15 19:44:33.667793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.061 [2024-07-15 19:44:33.667833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:43.061 [2024-07-15 19:44:33.667845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.058 ms 00:22:43.061 [2024-07-15 19:44:33.667859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.061 [2024-07-15 19:44:33.667899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.061 [2024-07-15 19:44:33.667913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:43.061 [2024-07-15 19:44:33.667924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:43.061 [2024-07-15 19:44:33.667936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.061 [2024-07-15 19:44:33.667983] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:43.061 [2024-07-15 19:44:33.668118] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:43.061 [2024-07-15 19:44:33.668132] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:43.061 [2024-07-15 19:44:33.668151] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:43.061 [2024-07-15 19:44:33.668165] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:43.061 [2024-07-15 19:44:33.668180] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:43.061 [2024-07-15 19:44:33.668191] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:43.061 [2024-07-15 19:44:33.668203] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:43.061 [2024-07-15 19:44:33.668216] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:43.061 [2024-07-15 19:44:33.668230] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:43.061 [2024-07-15 19:44:33.668240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.061 [2024-07-15 19:44:33.668253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:43.061 [2024-07-15 19:44:33.668263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:22:43.061 [2024-07-15 19:44:33.668276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.061 [2024-07-15 19:44:33.668347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.061 [2024-07-15 19:44:33.668360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:43.061 [2024-07-15 19:44:33.668370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:43.061 [2024-07-15 19:44:33.668382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.061 [2024-07-15 19:44:33.668471] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:43.061 [2024-07-15 19:44:33.668488] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:22:43.061 [2024-07-15 19:44:33.668510] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:43.061 [2024-07-15 19:44:33.668524] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:43.061 [2024-07-15 19:44:33.668534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:43.061 [2024-07-15 19:44:33.668546] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:43.061 [2024-07-15 19:44:33.668555] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:43.061 [2024-07-15 19:44:33.668567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:43.061 [2024-07-15 19:44:33.668578] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:43.061 [2024-07-15 19:44:33.668589] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:43.061 [2024-07-15 19:44:33.668598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:43.061 [2024-07-15 19:44:33.668611] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:43.061 [2024-07-15 19:44:33.668621] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:43.061 [2024-07-15 19:44:33.668634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:43.061 [2024-07-15 19:44:33.668644] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:43.061 [2024-07-15 19:44:33.668656] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:43.061 [2024-07-15 19:44:33.668667] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:43.061 [2024-07-15 19:44:33.668682] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:43.061 [2024-07-15 19:44:33.668691] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:43.061 [2024-07-15 19:44:33.668703] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:43.061 [2024-07-15 19:44:33.668713] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:43.061 [2024-07-15 19:44:33.668725] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:43.061 [2024-07-15 19:44:33.668734] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:43.061 [2024-07-15 19:44:33.668747] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:43.061 [2024-07-15 19:44:33.668756] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:43.061 [2024-07-15 19:44:33.668767] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:43.061 [2024-07-15 19:44:33.668794] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:43.061 [2024-07-15 19:44:33.668807] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:43.061 [2024-07-15 19:44:33.668816] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:43.061 [2024-07-15 19:44:33.668828] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:43.061 [2024-07-15 19:44:33.668837] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:43.061 [2024-07-15 19:44:33.668849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:43.061 [2024-07-15 19:44:33.668859] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:43.061 [2024-07-15 19:44:33.668874] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:43.061 [2024-07-15 19:44:33.668896] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:43.061 [2024-07-15 19:44:33.668908] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:43.061 [2024-07-15 19:44:33.668917] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:43.061 [2024-07-15 19:44:33.668928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:43.061 [2024-07-15 19:44:33.668938] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:43.061 [2024-07-15 19:44:33.668952] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:43.061 [2024-07-15 19:44:33.668961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:43.061 [2024-07-15 19:44:33.668973] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:43.061 [2024-07-15 19:44:33.668982] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:43.061 [2024-07-15 19:44:33.668994] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:43.061 [2024-07-15 19:44:33.669004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:43.061 [2024-07-15 19:44:33.669016] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:43.061 [2024-07-15 19:44:33.669025] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:43.061 [2024-07-15 19:44:33.669038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:43.061 [2024-07-15 19:44:33.669049] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:43.061 [2024-07-15 19:44:33.669063] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:43.061 [2024-07-15 19:44:33.669073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:43.061 [2024-07-15 19:44:33.669085] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:43.061 [2024-07-15 19:44:33.669094] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:43.062 [2024-07-15 19:44:33.669111] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:43.062 [2024-07-15 19:44:33.669123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:43.062 [2024-07-15 19:44:33.669141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:43.062 [2024-07-15 19:44:33.669152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:43.062 [2024-07-15 19:44:33.669165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:43.062 [2024-07-15 19:44:33.669176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:43.062 [2024-07-15 19:44:33.669189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:43.062 [2024-07-15 19:44:33.669199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:22:43.062 [2024-07-15 19:44:33.669212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:43.062 [2024-07-15 19:44:33.669222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:43.062 [2024-07-15 19:44:33.669237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:43.062 [2024-07-15 19:44:33.669247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:43.062 [2024-07-15 19:44:33.669262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:43.062 [2024-07-15 19:44:33.669273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:43.062 [2024-07-15 19:44:33.669285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:43.062 [2024-07-15 19:44:33.669296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:43.062 [2024-07-15 19:44:33.669309] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:43.062 [2024-07-15 19:44:33.669320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:43.062 [2024-07-15 19:44:33.669334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:43.062 [2024-07-15 19:44:33.669344] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:43.062 [2024-07-15 19:44:33.669357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:43.062 [2024-07-15 19:44:33.669367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:43.062 [2024-07-15 19:44:33.669380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.062 [2024-07-15 19:44:33.669391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:43.062 [2024-07-15 19:44:33.669403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.959 ms 00:22:43.062 [2024-07-15 19:44:33.669413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.062 [2024-07-15 19:44:33.669459] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:22:43.062 [2024-07-15 19:44:33.669472] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:45.599 [2024-07-15 19:44:36.008708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.599 [2024-07-15 19:44:36.008791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:45.599 [2024-07-15 19:44:36.008813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2339.226 ms 00:22:45.599 [2024-07-15 19:44:36.008825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.599 [2024-07-15 19:44:36.054083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.599 [2024-07-15 19:44:36.054145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:45.599 [2024-07-15 19:44:36.054165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.938 ms 00:22:45.599 [2024-07-15 19:44:36.054176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.599 [2024-07-15 19:44:36.054343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.599 [2024-07-15 19:44:36.054356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:45.599 [2024-07-15 19:44:36.054370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:45.599 [2024-07-15 19:44:36.054393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.599 [2024-07-15 19:44:36.105929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.599 [2024-07-15 19:44:36.105989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:45.599 [2024-07-15 19:44:36.106008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.486 ms 00:22:45.599 [2024-07-15 19:44:36.106018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.599 [2024-07-15 19:44:36.106077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.600 [2024-07-15 19:44:36.106097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:45.600 [2024-07-15 19:44:36.106111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:45.600 [2024-07-15 19:44:36.106121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.600 [2024-07-15 19:44:36.106634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.600 [2024-07-15 19:44:36.106650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:45.600 [2024-07-15 19:44:36.106663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:22:45.600 [2024-07-15 19:44:36.106674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.600 [2024-07-15 19:44:36.106804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.600 [2024-07-15 19:44:36.106818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:45.600 [2024-07-15 19:44:36.106836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:22:45.600 [2024-07-15 19:44:36.106846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.600 [2024-07-15 19:44:36.129009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.600 [2024-07-15 19:44:36.129060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:45.600 [2024-07-15 
19:44:36.129077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.136 ms 00:22:45.600 [2024-07-15 19:44:36.129087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.600 [2024-07-15 19:44:36.143285] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:45.600 [2024-07-15 19:44:36.146546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.600 [2024-07-15 19:44:36.146581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:45.600 [2024-07-15 19:44:36.146596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.351 ms 00:22:45.600 [2024-07-15 19:44:36.146609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.600 [2024-07-15 19:44:36.233855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.600 [2024-07-15 19:44:36.233929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:45.600 [2024-07-15 19:44:36.233946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.200 ms 00:22:45.600 [2024-07-15 19:44:36.233960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.600 [2024-07-15 19:44:36.234177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.600 [2024-07-15 19:44:36.234198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:45.600 [2024-07-15 19:44:36.234210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:22:45.600 [2024-07-15 19:44:36.234226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.600 [2024-07-15 19:44:36.274897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.600 [2024-07-15 19:44:36.274977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:45.600 [2024-07-15 19:44:36.274994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.594 ms 00:22:45.600 [2024-07-15 19:44:36.275009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.600 [2024-07-15 19:44:36.314760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.600 [2024-07-15 19:44:36.314817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:45.600 [2024-07-15 19:44:36.314833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.694 ms 00:22:45.600 [2024-07-15 19:44:36.314846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.600 [2024-07-15 19:44:36.315698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.600 [2024-07-15 19:44:36.315727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:45.600 [2024-07-15 19:44:36.315741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 00:22:45.600 [2024-07-15 19:44:36.315757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.871 [2024-07-15 19:44:36.428825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.871 [2024-07-15 19:44:36.428898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:45.871 [2024-07-15 19:44:36.428916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.998 ms 00:22:45.871 [2024-07-15 19:44:36.428934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.871 [2024-07-15 
19:44:36.476560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.871 [2024-07-15 19:44:36.476630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:45.871 [2024-07-15 19:44:36.476648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.554 ms 00:22:45.871 [2024-07-15 19:44:36.476662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.871 [2024-07-15 19:44:36.523666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.871 [2024-07-15 19:44:36.523740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:45.871 [2024-07-15 19:44:36.523759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.948 ms 00:22:45.871 [2024-07-15 19:44:36.523774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.871 [2024-07-15 19:44:36.570363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.871 [2024-07-15 19:44:36.570442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:45.871 [2024-07-15 19:44:36.570461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.509 ms 00:22:45.871 [2024-07-15 19:44:36.570484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.871 [2024-07-15 19:44:36.570576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.871 [2024-07-15 19:44:36.570602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:45.871 [2024-07-15 19:44:36.570621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:22:45.871 [2024-07-15 19:44:36.570647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.871 [2024-07-15 19:44:36.570808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.871 [2024-07-15 19:44:36.570836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:45.871 [2024-07-15 19:44:36.570857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:45.871 [2024-07-15 19:44:36.570873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.871 [2024-07-15 19:44:36.572156] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2921.078 ms, result 0 00:22:45.871 { 00:22:45.871 "name": "ftl0", 00:22:45.871 "uuid": "f3128da4-5021-4384-924c-f29450b8d9c2" 00:22:45.871 } 00:22:45.871 19:44:36 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:22:45.871 19:44:36 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:46.129 19:44:36 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:22:46.129 19:44:36 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:46.385 [2024-07-15 19:44:37.011406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.385 [2024-07-15 19:44:37.011476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:46.385 [2024-07-15 19:44:37.011499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:46.385 [2024-07-15 19:44:37.011512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.385 [2024-07-15 19:44:37.011547] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
00:22:46.385 [2024-07-15 19:44:37.016249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.385 [2024-07-15 19:44:37.016307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:46.385 [2024-07-15 19:44:37.016324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.680 ms 00:22:46.385 [2024-07-15 19:44:37.016343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.385 [2024-07-15 19:44:37.016640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.385 [2024-07-15 19:44:37.016672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:46.385 [2024-07-15 19:44:37.016706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:22:46.385 [2024-07-15 19:44:37.016726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.385 [2024-07-15 19:44:37.019916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.385 [2024-07-15 19:44:37.019951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:46.385 [2024-07-15 19:44:37.019966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.160 ms 00:22:46.385 [2024-07-15 19:44:37.019981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.385 [2024-07-15 19:44:37.026261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.385 [2024-07-15 19:44:37.026306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:46.385 [2024-07-15 19:44:37.026325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.254 ms 00:22:46.385 [2024-07-15 19:44:37.026341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.385 [2024-07-15 19:44:37.072993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.385 [2024-07-15 19:44:37.073059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:46.385 [2024-07-15 19:44:37.073077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.548 ms 00:22:46.385 [2024-07-15 19:44:37.073093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.385 [2024-07-15 19:44:37.100270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.385 [2024-07-15 19:44:37.100365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:46.385 [2024-07-15 19:44:37.100384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.119 ms 00:22:46.385 [2024-07-15 19:44:37.100398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.385 [2024-07-15 19:44:37.100587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.385 [2024-07-15 19:44:37.100606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:46.385 [2024-07-15 19:44:37.100618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:22:46.385 [2024-07-15 19:44:37.100632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.385 [2024-07-15 19:44:37.147575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.385 [2024-07-15 19:44:37.147641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:46.385 [2024-07-15 19:44:37.147659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.919 ms 00:22:46.385 [2024-07-15 19:44:37.147674] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.645 [2024-07-15 19:44:37.193675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.645 [2024-07-15 19:44:37.193740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:46.645 [2024-07-15 19:44:37.193758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.942 ms 00:22:46.645 [2024-07-15 19:44:37.193773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.645 [2024-07-15 19:44:37.239710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.645 [2024-07-15 19:44:37.239792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:46.645 [2024-07-15 19:44:37.239812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.864 ms 00:22:46.645 [2024-07-15 19:44:37.239827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.645 [2024-07-15 19:44:37.285391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.645 [2024-07-15 19:44:37.285451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:46.645 [2024-07-15 19:44:37.285469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.436 ms 00:22:46.645 [2024-07-15 19:44:37.285483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.645 [2024-07-15 19:44:37.285534] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:46.645 [2024-07-15 19:44:37.285557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:46.645 [2024-07-15 19:44:37.285572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:46.645 [2024-07-15 19:44:37.285587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:46.645 [2024-07-15 19:44:37.285599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:46.645 [2024-07-15 19:44:37.285614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:46.645 [2024-07-15 19:44:37.285626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:46.645 [2024-07-15 19:44:37.285641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:46.645 [2024-07-15 19:44:37.285664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:46.645 [2024-07-15 19:44:37.285681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:46.645 [2024-07-15 19:44:37.285692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:46.645 [2024-07-15 19:44:37.285706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:46.645 [2024-07-15 19:44:37.285717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:46.645 [2024-07-15 19:44:37.285730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:46.645 [2024-07-15 19:44:37.285741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:46.645 [2024-07-15 
19:44:37.285754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:46.645 [2024-07-15 19:44:37.285765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.285778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.285802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.285817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.285828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.285844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.285855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.285884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.285897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.285914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.285926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.285958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.285970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.285985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.285999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:22:46.646 [2024-07-15 19:44:37.286145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.286988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.287004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.287016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.287032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.287045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.287060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.287072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.287091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.287104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.287120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.287133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.287148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.287161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.287177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.287191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.287206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.287220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.287237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.287250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:46.646 [2024-07-15 19:44:37.287275] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:46.647 [2024-07-15 19:44:37.287295] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f3128da4-5021-4384-924c-f29450b8d9c2 00:22:46.647 [2024-07-15 19:44:37.287320] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:46.647 [2024-07-15 19:44:37.287333] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:46.647 [2024-07-15 19:44:37.287350] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:46.647 [2024-07-15 19:44:37.287363] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:46.647 [2024-07-15 19:44:37.287378] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:46.647 [2024-07-15 19:44:37.287390] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:46.647 [2024-07-15 19:44:37.287405] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:46.647 [2024-07-15 19:44:37.287416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:46.647 [2024-07-15 19:44:37.287429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:46.647 [2024-07-15 19:44:37.287441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.647 [2024-07-15 19:44:37.287457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:46.647 [2024-07-15 19:44:37.287470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.909 ms 00:22:46.647 [2024-07-15 19:44:37.287485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.647 [2024-07-15 19:44:37.312508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.647 [2024-07-15 19:44:37.312582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:22:46.647 [2024-07-15 19:44:37.312599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.952 ms 00:22:46.647 [2024-07-15 19:44:37.312614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.647 [2024-07-15 19:44:37.313196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.647 [2024-07-15 19:44:37.313244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:46.647 [2024-07-15 19:44:37.313260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:22:46.647 [2024-07-15 19:44:37.313279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.647 [2024-07-15 19:44:37.389331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.647 [2024-07-15 19:44:37.389421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:46.647 [2024-07-15 19:44:37.389439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.647 [2024-07-15 19:44:37.389455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.647 [2024-07-15 19:44:37.389563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.647 [2024-07-15 19:44:37.389579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:46.647 [2024-07-15 19:44:37.389592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.647 [2024-07-15 19:44:37.389611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.647 [2024-07-15 19:44:37.389722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.647 [2024-07-15 19:44:37.389743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:46.647 [2024-07-15 19:44:37.389756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.647 [2024-07-15 19:44:37.389771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.647 [2024-07-15 19:44:37.389794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.647 [2024-07-15 19:44:37.389833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:46.647 [2024-07-15 19:44:37.389846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.647 [2024-07-15 19:44:37.389861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.905 [2024-07-15 19:44:37.540036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.905 [2024-07-15 19:44:37.540109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:46.905 [2024-07-15 19:44:37.540127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.905 [2024-07-15 19:44:37.540142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.905 [2024-07-15 19:44:37.665864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.905 [2024-07-15 19:44:37.665945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:46.905 [2024-07-15 19:44:37.665977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.905 [2024-07-15 19:44:37.665996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.905 [2024-07-15 19:44:37.666120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.905 [2024-07-15 19:44:37.666139] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:46.905 [2024-07-15 19:44:37.666151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.905 [2024-07-15 19:44:37.666165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.905 [2024-07-15 19:44:37.666223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.905 [2024-07-15 19:44:37.666243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:46.905 [2024-07-15 19:44:37.666255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.905 [2024-07-15 19:44:37.666270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.905 [2024-07-15 19:44:37.666440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.905 [2024-07-15 19:44:37.666461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:46.905 [2024-07-15 19:44:37.666473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.905 [2024-07-15 19:44:37.666488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.905 [2024-07-15 19:44:37.666534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.905 [2024-07-15 19:44:37.666552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:46.905 [2024-07-15 19:44:37.666565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.905 [2024-07-15 19:44:37.666579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.905 [2024-07-15 19:44:37.666629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.905 [2024-07-15 19:44:37.666645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:46.905 [2024-07-15 19:44:37.666658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.905 [2024-07-15 19:44:37.666672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.905 [2024-07-15 19:44:37.666722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.905 [2024-07-15 19:44:37.666742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:46.905 [2024-07-15 19:44:37.666754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.905 [2024-07-15 19:44:37.666780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.905 [2024-07-15 19:44:37.667168] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 655.716 ms, result 0 00:22:46.905 true 00:22:46.905 19:44:37 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 82109 00:22:46.905 19:44:37 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 82109 ']' 00:22:46.905 19:44:37 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 82109 00:22:47.163 19:44:37 ftl.ftl_restore -- common/autotest_common.sh@953 -- # uname 00:22:47.163 19:44:37 ftl.ftl_restore -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:47.163 19:44:37 ftl.ftl_restore -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82109 00:22:47.163 killing process with pid 82109 00:22:47.163 19:44:37 ftl.ftl_restore -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:47.163 19:44:37 ftl.ftl_restore -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:22:47.163 19:44:37 ftl.ftl_restore -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82109' 00:22:47.163 19:44:37 ftl.ftl_restore -- common/autotest_common.sh@967 -- # kill 82109 00:22:47.163 19:44:37 ftl.ftl_restore -- common/autotest_common.sh@972 -- # wait 82109 00:22:50.442 19:44:40 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:22:54.669 262144+0 records in 00:22:54.669 262144+0 records out 00:22:54.669 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.39863 s, 244 MB/s 00:22:54.669 19:44:45 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:56.572 19:44:47 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:56.572 [2024-07-15 19:44:47.317676] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:22:56.572 [2024-07-15 19:44:47.317831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82347 ] 00:22:56.831 [2024-07-15 19:44:47.488204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.089 [2024-07-15 19:44:47.833521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.656 [2024-07-15 19:44:48.235324] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:57.656 [2024-07-15 19:44:48.235399] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:57.656 [2024-07-15 19:44:48.399747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.656 [2024-07-15 19:44:48.399822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:57.656 [2024-07-15 19:44:48.399838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:57.656 [2024-07-15 19:44:48.399848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.656 [2024-07-15 19:44:48.399907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.656 [2024-07-15 19:44:48.399920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:57.656 [2024-07-15 19:44:48.399932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:57.656 [2024-07-15 19:44:48.399946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.656 [2024-07-15 19:44:48.399967] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:57.656 [2024-07-15 19:44:48.401115] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:57.656 [2024-07-15 19:44:48.401141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.656 [2024-07-15 19:44:48.401155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:57.656 [2024-07-15 19:44:48.401166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.178 ms 00:22:57.656 [2024-07-15 19:44:48.401176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.656 [2024-07-15 19:44:48.402633] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:57.656 [2024-07-15 19:44:48.423620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.656 [2024-07-15 19:44:48.423664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:57.656 [2024-07-15 19:44:48.423679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.987 ms 00:22:57.656 [2024-07-15 19:44:48.423690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.656 [2024-07-15 19:44:48.423763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.656 [2024-07-15 19:44:48.423775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:57.656 [2024-07-15 19:44:48.423806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:57.656 [2024-07-15 19:44:48.423816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.656 [2024-07-15 19:44:48.430878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.656 [2024-07-15 19:44:48.430912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:57.656 [2024-07-15 19:44:48.430923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.988 ms 00:22:57.656 [2024-07-15 19:44:48.430934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.656 [2024-07-15 19:44:48.431017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.656 [2024-07-15 19:44:48.431034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:57.656 [2024-07-15 19:44:48.431045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:57.656 [2024-07-15 19:44:48.431056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.656 [2024-07-15 19:44:48.431102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.656 [2024-07-15 19:44:48.431114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:57.656 [2024-07-15 19:44:48.431124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:57.656 [2024-07-15 19:44:48.431134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.656 [2024-07-15 19:44:48.431161] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:57.656 [2024-07-15 19:44:48.437006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.656 [2024-07-15 19:44:48.437040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:57.656 [2024-07-15 19:44:48.437052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.852 ms 00:22:57.656 [2024-07-15 19:44:48.437062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.656 [2024-07-15 19:44:48.437099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.656 [2024-07-15 19:44:48.437110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:57.656 [2024-07-15 19:44:48.437121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:57.656 [2024-07-15 19:44:48.437130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.656 [2024-07-15 19:44:48.437182] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:57.656 [2024-07-15 19:44:48.437207] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:57.656 [2024-07-15 19:44:48.437242] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:57.656 [2024-07-15 19:44:48.437262] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:57.656 [2024-07-15 19:44:48.437351] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:57.656 [2024-07-15 19:44:48.437364] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:57.656 [2024-07-15 19:44:48.437377] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:57.656 [2024-07-15 19:44:48.437390] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:57.656 [2024-07-15 19:44:48.437402] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:57.656 [2024-07-15 19:44:48.437412] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:57.656 [2024-07-15 19:44:48.437422] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:57.656 [2024-07-15 19:44:48.437433] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:57.656 [2024-07-15 19:44:48.437443] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:57.656 [2024-07-15 19:44:48.437453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.656 [2024-07-15 19:44:48.437466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:57.656 [2024-07-15 19:44:48.437477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:22:57.656 [2024-07-15 19:44:48.437486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.656 [2024-07-15 19:44:48.437555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.656 [2024-07-15 19:44:48.437565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:57.656 [2024-07-15 19:44:48.437576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:57.656 [2024-07-15 19:44:48.437585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.656 [2024-07-15 19:44:48.437671] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:57.656 [2024-07-15 19:44:48.437683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:57.656 [2024-07-15 19:44:48.437697] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:57.656 [2024-07-15 19:44:48.437707] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:57.656 [2024-07-15 19:44:48.437716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:57.656 [2024-07-15 19:44:48.437726] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:57.656 [2024-07-15 19:44:48.437736] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:57.656 [2024-07-15 19:44:48.437746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:57.656 [2024-07-15 19:44:48.437756] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:57.656 [2024-07-15 
19:44:48.437765] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:57.656 [2024-07-15 19:44:48.437774] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:57.656 [2024-07-15 19:44:48.437801] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:57.656 [2024-07-15 19:44:48.437811] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:57.656 [2024-07-15 19:44:48.437821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:57.656 [2024-07-15 19:44:48.437830] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:57.656 [2024-07-15 19:44:48.437840] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:57.656 [2024-07-15 19:44:48.437849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:57.656 [2024-07-15 19:44:48.437858] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:57.656 [2024-07-15 19:44:48.437867] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:57.656 [2024-07-15 19:44:48.437877] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:57.656 [2024-07-15 19:44:48.437914] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:57.656 [2024-07-15 19:44:48.437923] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:57.656 [2024-07-15 19:44:48.437933] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:57.656 [2024-07-15 19:44:48.437942] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:57.656 [2024-07-15 19:44:48.437951] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:57.656 [2024-07-15 19:44:48.437960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:57.656 [2024-07-15 19:44:48.437969] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:57.656 [2024-07-15 19:44:48.437978] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:57.656 [2024-07-15 19:44:48.437987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:57.656 [2024-07-15 19:44:48.437997] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:57.656 [2024-07-15 19:44:48.438006] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:57.656 [2024-07-15 19:44:48.438015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:57.656 [2024-07-15 19:44:48.438024] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:57.656 [2024-07-15 19:44:48.438033] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:57.656 [2024-07-15 19:44:48.438042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:57.656 [2024-07-15 19:44:48.438051] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:57.656 [2024-07-15 19:44:48.438060] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:57.656 [2024-07-15 19:44:48.438070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:57.656 [2024-07-15 19:44:48.438079] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:57.656 [2024-07-15 19:44:48.438088] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:57.656 [2024-07-15 19:44:48.438096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:22:57.656 [2024-07-15 19:44:48.438105] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:57.656 [2024-07-15 19:44:48.438114] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:57.656 [2024-07-15 19:44:48.438122] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:57.656 [2024-07-15 19:44:48.438133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:57.656 [2024-07-15 19:44:48.438143] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:57.656 [2024-07-15 19:44:48.438152] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:57.656 [2024-07-15 19:44:48.438162] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:57.656 [2024-07-15 19:44:48.438172] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:57.657 [2024-07-15 19:44:48.438181] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:57.657 [2024-07-15 19:44:48.438190] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:57.657 [2024-07-15 19:44:48.438199] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:57.657 [2024-07-15 19:44:48.438208] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:57.657 [2024-07-15 19:44:48.438218] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:57.657 [2024-07-15 19:44:48.438231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:57.657 [2024-07-15 19:44:48.438242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:57.657 [2024-07-15 19:44:48.438252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:57.657 [2024-07-15 19:44:48.438262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:57.657 [2024-07-15 19:44:48.438272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:57.657 [2024-07-15 19:44:48.438282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:57.657 [2024-07-15 19:44:48.438308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:57.657 [2024-07-15 19:44:48.438320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:57.657 [2024-07-15 19:44:48.438331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:57.657 [2024-07-15 19:44:48.438343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:57.657 [2024-07-15 19:44:48.438354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:57.657 [2024-07-15 19:44:48.438365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:57.657 [2024-07-15 19:44:48.438376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:57.657 [2024-07-15 19:44:48.438395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:57.657 [2024-07-15 19:44:48.438407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:57.657 [2024-07-15 19:44:48.438418] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:57.657 [2024-07-15 19:44:48.438430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:57.657 [2024-07-15 19:44:48.438442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:57.657 [2024-07-15 19:44:48.438453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:57.657 [2024-07-15 19:44:48.438466] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:57.657 [2024-07-15 19:44:48.438477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:57.657 [2024-07-15 19:44:48.438489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.657 [2024-07-15 19:44:48.438507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:57.657 [2024-07-15 19:44:48.438518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.871 ms 00:22:57.657 [2024-07-15 19:44:48.438528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.964 [2024-07-15 19:44:48.494014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.964 [2024-07-15 19:44:48.494078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:57.964 [2024-07-15 19:44:48.494095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.429 ms 00:22:57.964 [2024-07-15 19:44:48.494107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.964 [2024-07-15 19:44:48.494217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.964 [2024-07-15 19:44:48.494229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:57.964 [2024-07-15 19:44:48.494241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:57.964 [2024-07-15 19:44:48.494251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.964 [2024-07-15 19:44:48.545287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.964 [2024-07-15 19:44:48.545342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:57.964 [2024-07-15 19:44:48.545357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.953 ms 00:22:57.964 [2024-07-15 19:44:48.545367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.964 [2024-07-15 19:44:48.545425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.964 [2024-07-15 
19:44:48.545436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:57.964 [2024-07-15 19:44:48.545447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:57.964 [2024-07-15 19:44:48.545456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.964 [2024-07-15 19:44:48.545987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.964 [2024-07-15 19:44:48.546004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:57.964 [2024-07-15 19:44:48.546017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:22:57.964 [2024-07-15 19:44:48.546027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.964 [2024-07-15 19:44:48.546149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.964 [2024-07-15 19:44:48.546163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:57.964 [2024-07-15 19:44:48.546173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:22:57.964 [2024-07-15 19:44:48.546182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.964 [2024-07-15 19:44:48.567878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.964 [2024-07-15 19:44:48.567923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:57.964 [2024-07-15 19:44:48.567937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.674 ms 00:22:57.964 [2024-07-15 19:44:48.567948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.964 [2024-07-15 19:44:48.589149] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:57.964 [2024-07-15 19:44:48.589205] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:57.964 [2024-07-15 19:44:48.589226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.964 [2024-07-15 19:44:48.589237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:57.964 [2024-07-15 19:44:48.589250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.147 ms 00:22:57.964 [2024-07-15 19:44:48.589260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.964 [2024-07-15 19:44:48.622122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.964 [2024-07-15 19:44:48.622171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:57.964 [2024-07-15 19:44:48.622186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.806 ms 00:22:57.964 [2024-07-15 19:44:48.622196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.964 [2024-07-15 19:44:48.642834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.964 [2024-07-15 19:44:48.642874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:57.965 [2024-07-15 19:44:48.642887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.597 ms 00:22:57.965 [2024-07-15 19:44:48.642897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.965 [2024-07-15 19:44:48.663547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.965 [2024-07-15 19:44:48.663590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:22:57.965 [2024-07-15 19:44:48.663604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.596 ms 00:22:57.965 [2024-07-15 19:44:48.663614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.965 [2024-07-15 19:44:48.664528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.965 [2024-07-15 19:44:48.664564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:57.965 [2024-07-15 19:44:48.664577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:22:57.965 [2024-07-15 19:44:48.664588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.224 [2024-07-15 19:44:48.760827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.224 [2024-07-15 19:44:48.760899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:58.224 [2024-07-15 19:44:48.760917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.212 ms 00:22:58.224 [2024-07-15 19:44:48.760928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.224 [2024-07-15 19:44:48.774027] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:58.224 [2024-07-15 19:44:48.777366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.224 [2024-07-15 19:44:48.777407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:58.224 [2024-07-15 19:44:48.777422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.357 ms 00:22:58.224 [2024-07-15 19:44:48.777449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.224 [2024-07-15 19:44:48.777571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.224 [2024-07-15 19:44:48.777584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:58.224 [2024-07-15 19:44:48.777595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:58.224 [2024-07-15 19:44:48.777605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.224 [2024-07-15 19:44:48.777678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.224 [2024-07-15 19:44:48.777690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:58.224 [2024-07-15 19:44:48.777705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:58.224 [2024-07-15 19:44:48.777715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.224 [2024-07-15 19:44:48.777735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.224 [2024-07-15 19:44:48.777745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:58.224 [2024-07-15 19:44:48.777755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:58.224 [2024-07-15 19:44:48.777765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.224 [2024-07-15 19:44:48.777817] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:58.224 [2024-07-15 19:44:48.777830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.224 [2024-07-15 19:44:48.777840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:58.224 [2024-07-15 19:44:48.777850] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:58.224 [2024-07-15 19:44:48.777863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.224 [2024-07-15 19:44:48.819480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.224 [2024-07-15 19:44:48.819554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:58.224 [2024-07-15 19:44:48.819572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.592 ms 00:22:58.224 [2024-07-15 19:44:48.819583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.224 [2024-07-15 19:44:48.819707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.224 [2024-07-15 19:44:48.819721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:58.224 [2024-07-15 19:44:48.819744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:58.224 [2024-07-15 19:44:48.819754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.224 [2024-07-15 19:44:48.820988] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 420.726 ms, result 0 00:23:30.671  Copying: 32/1024 [MB] (32 MBps) Copying: 66/1024 [MB] (34 MBps) Copying: 98/1024 [MB] (31 MBps) Copying: 128/1024 [MB] (30 MBps) Copying: 159/1024 [MB] (30 MBps) Copying: 192/1024 [MB] (32 MBps) Copying: 224/1024 [MB] (32 MBps) Copying: 256/1024 [MB] (31 MBps) Copying: 286/1024 [MB] (30 MBps) Copying: 318/1024 [MB] (31 MBps) Copying: 353/1024 [MB] (34 MBps) Copying: 387/1024 [MB] (34 MBps) Copying: 414/1024 [MB] (26 MBps) Copying: 443/1024 [MB] (29 MBps) Copying: 472/1024 [MB] (29 MBps) Copying: 502/1024 [MB] (29 MBps) Copying: 530/1024 [MB] (28 MBps) Copying: 560/1024 [MB] (29 MBps) Copying: 596/1024 [MB] (36 MBps) Copying: 631/1024 [MB] (34 MBps) Copying: 665/1024 [MB] (33 MBps) Copying: 696/1024 [MB] (31 MBps) Copying: 726/1024 [MB] (29 MBps) Copying: 756/1024 [MB] (30 MBps) Copying: 786/1024 [MB] (30 MBps) Copying: 818/1024 [MB] (31 MBps) Copying: 849/1024 [MB] (30 MBps) Copying: 881/1024 [MB] (31 MBps) Copying: 913/1024 [MB] (32 MBps) Copying: 944/1024 [MB] (30 MBps) Copying: 975/1024 [MB] (30 MBps) Copying: 1005/1024 [MB] (30 MBps) Copying: 1024/1024 [MB] (average 31 MBps)[2024-07-15 19:45:21.438680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.671 [2024-07-15 19:45:21.438761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:30.671 [2024-07-15 19:45:21.438797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:30.671 [2024-07-15 19:45:21.438810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.671 [2024-07-15 19:45:21.438836] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:30.671 [2024-07-15 19:45:21.442658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.671 [2024-07-15 19:45:21.442694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:30.671 [2024-07-15 19:45:21.442707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.802 ms 00:23:30.671 [2024-07-15 19:45:21.442718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.671 [2024-07-15 19:45:21.445422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.671 [2024-07-15 19:45:21.445466] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:30.671 [2024-07-15 19:45:21.445489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.679 ms 00:23:30.671 [2024-07-15 19:45:21.445500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.929 [2024-07-15 19:45:21.462404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.929 [2024-07-15 19:45:21.462444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:30.929 [2024-07-15 19:45:21.462458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.876 ms 00:23:30.929 [2024-07-15 19:45:21.462468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.929 [2024-07-15 19:45:21.467723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.929 [2024-07-15 19:45:21.467769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:30.929 [2024-07-15 19:45:21.467799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.204 ms 00:23:30.929 [2024-07-15 19:45:21.467809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.929 [2024-07-15 19:45:21.507035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.929 [2024-07-15 19:45:21.507087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:30.929 [2024-07-15 19:45:21.507102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.166 ms 00:23:30.929 [2024-07-15 19:45:21.507112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.929 [2024-07-15 19:45:21.531517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.929 [2024-07-15 19:45:21.531565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:30.929 [2024-07-15 19:45:21.531581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.366 ms 00:23:30.929 [2024-07-15 19:45:21.531591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.929 [2024-07-15 19:45:21.531733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.929 [2024-07-15 19:45:21.531749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:30.929 [2024-07-15 19:45:21.531761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:23:30.929 [2024-07-15 19:45:21.531771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.929 [2024-07-15 19:45:21.570662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.929 [2024-07-15 19:45:21.570702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:30.929 [2024-07-15 19:45:21.570717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.853 ms 00:23:30.929 [2024-07-15 19:45:21.570727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.929 [2024-07-15 19:45:21.609343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.929 [2024-07-15 19:45:21.609390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:30.929 [2024-07-15 19:45:21.609404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.578 ms 00:23:30.929 [2024-07-15 19:45:21.609414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.929 [2024-07-15 19:45:21.646886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:30.929 [2024-07-15 19:45:21.646931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:30.929 [2024-07-15 19:45:21.646944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.432 ms 00:23:30.929 [2024-07-15 19:45:21.646967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.929 [2024-07-15 19:45:21.683992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.929 [2024-07-15 19:45:21.684031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:30.929 [2024-07-15 19:45:21.684043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.946 ms 00:23:30.929 [2024-07-15 19:45:21.684053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.929 [2024-07-15 19:45:21.684090] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:30.929 [2024-07-15 19:45:21.684124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684328] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:30.929 [2024-07-15 19:45:21.684469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 
19:45:21.684599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 
00:23:30.930 [2024-07-15 19:45:21.684882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.684999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.685010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.685021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.685031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.685041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.685053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.685064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.685074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.685085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.685107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.685117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.685128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.685139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.685149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 
wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.685160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.685172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.685183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.685194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.685204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:30.930 [2024-07-15 19:45:21.685215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-07-15 19:45:21.685226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-07-15 19:45:21.685245] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:30.931 [2024-07-15 19:45:21.685256] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f3128da4-5021-4384-924c-f29450b8d9c2 00:23:30.931 [2024-07-15 19:45:21.685267] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:30.931 [2024-07-15 19:45:21.685277] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:30.931 [2024-07-15 19:45:21.685286] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:30.931 [2024-07-15 19:45:21.685302] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:30.931 [2024-07-15 19:45:21.685311] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:30.931 [2024-07-15 19:45:21.685322] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:30.931 [2024-07-15 19:45:21.685331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:30.931 [2024-07-15 19:45:21.685341] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:30.931 [2024-07-15 19:45:21.685350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:30.931 [2024-07-15 19:45:21.685360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.931 [2024-07-15 19:45:21.685370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:30.931 [2024-07-15 19:45:21.685380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.271 ms 00:23:30.931 [2024-07-15 19:45:21.685390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.931 [2024-07-15 19:45:21.705377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.931 [2024-07-15 19:45:21.705433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:30.931 [2024-07-15 19:45:21.705447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.949 ms 00:23:30.931 [2024-07-15 19:45:21.705468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.931 [2024-07-15 19:45:21.706027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.931 [2024-07-15 19:45:21.706040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:30.931 [2024-07-15 19:45:21.706050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:23:30.931 
[2024-07-15 19:45:21.706060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.188 [2024-07-15 19:45:21.749763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.188 [2024-07-15 19:45:21.749830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:31.188 [2024-07-15 19:45:21.749844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.188 [2024-07-15 19:45:21.749854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.188 [2024-07-15 19:45:21.749917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.188 [2024-07-15 19:45:21.749928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:31.188 [2024-07-15 19:45:21.749938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.188 [2024-07-15 19:45:21.749948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.188 [2024-07-15 19:45:21.750026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.188 [2024-07-15 19:45:21.750045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:31.188 [2024-07-15 19:45:21.750055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.188 [2024-07-15 19:45:21.750064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.188 [2024-07-15 19:45:21.750081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.188 [2024-07-15 19:45:21.750091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:31.188 [2024-07-15 19:45:21.750101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.188 [2024-07-15 19:45:21.750111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.188 [2024-07-15 19:45:21.869936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.188 [2024-07-15 19:45:21.870008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:31.188 [2024-07-15 19:45:21.870023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.188 [2024-07-15 19:45:21.870033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.188 [2024-07-15 19:45:21.975122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.188 [2024-07-15 19:45:21.975179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:31.188 [2024-07-15 19:45:21.975195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.188 [2024-07-15 19:45:21.975223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.188 [2024-07-15 19:45:21.975295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.188 [2024-07-15 19:45:21.975308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:31.188 [2024-07-15 19:45:21.975319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.188 [2024-07-15 19:45:21.975338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.188 [2024-07-15 19:45:21.975380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.188 [2024-07-15 19:45:21.975392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:31.188 [2024-07-15 19:45:21.975404] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.188 [2024-07-15 19:45:21.975415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.188 [2024-07-15 19:45:21.975522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.189 [2024-07-15 19:45:21.975536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:31.189 [2024-07-15 19:45:21.975548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.189 [2024-07-15 19:45:21.975564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.189 [2024-07-15 19:45:21.975606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.189 [2024-07-15 19:45:21.975620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:31.189 [2024-07-15 19:45:21.975632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.189 [2024-07-15 19:45:21.975643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.189 [2024-07-15 19:45:21.975682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.189 [2024-07-15 19:45:21.975695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:31.189 [2024-07-15 19:45:21.975706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.189 [2024-07-15 19:45:21.975718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.189 [2024-07-15 19:45:21.975770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.189 [2024-07-15 19:45:21.975783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:31.189 [2024-07-15 19:45:21.975808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.189 [2024-07-15 19:45:21.975820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.189 [2024-07-15 19:45:21.975959] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 537.231 ms, result 0 00:23:33.087 00:23:33.087 00:23:33.087 19:45:23 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:23:33.087 [2024-07-15 19:45:23.753884] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:23:33.087 [2024-07-15 19:45:23.754057] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82709 ] 00:23:33.346 [2024-07-15 19:45:23.923280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.604 [2024-07-15 19:45:24.162039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.863 [2024-07-15 19:45:24.575160] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:33.863 [2024-07-15 19:45:24.575233] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:34.122 [2024-07-15 19:45:24.736673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.122 [2024-07-15 19:45:24.736733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:34.122 [2024-07-15 19:45:24.736750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:34.122 [2024-07-15 19:45:24.736761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.122 [2024-07-15 19:45:24.736845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.122 [2024-07-15 19:45:24.736860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:34.122 [2024-07-15 19:45:24.736872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:34.122 [2024-07-15 19:45:24.736904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.122 [2024-07-15 19:45:24.736928] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:34.122 [2024-07-15 19:45:24.738047] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:34.122 [2024-07-15 19:45:24.738079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.122 [2024-07-15 19:45:24.738093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:34.122 [2024-07-15 19:45:24.738105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.155 ms 00:23:34.122 [2024-07-15 19:45:24.738116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.122 [2024-07-15 19:45:24.739596] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:34.122 [2024-07-15 19:45:24.758991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.122 [2024-07-15 19:45:24.759030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:34.122 [2024-07-15 19:45:24.759044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.396 ms 00:23:34.122 [2024-07-15 19:45:24.759055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.122 [2024-07-15 19:45:24.759128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.122 [2024-07-15 19:45:24.759141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:34.122 [2024-07-15 19:45:24.759155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:34.122 [2024-07-15 19:45:24.759166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.122 [2024-07-15 19:45:24.766117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:34.122 [2024-07-15 19:45:24.766158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:34.122 [2024-07-15 19:45:24.766170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.881 ms 00:23:34.122 [2024-07-15 19:45:24.766181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.122 [2024-07-15 19:45:24.766261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.122 [2024-07-15 19:45:24.766278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:34.122 [2024-07-15 19:45:24.766290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:34.122 [2024-07-15 19:45:24.766300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.122 [2024-07-15 19:45:24.766346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.122 [2024-07-15 19:45:24.766358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:34.122 [2024-07-15 19:45:24.766369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:34.122 [2024-07-15 19:45:24.766379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.122 [2024-07-15 19:45:24.766414] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:34.122 [2024-07-15 19:45:24.772119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.122 [2024-07-15 19:45:24.772154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:34.122 [2024-07-15 19:45:24.772167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.711 ms 00:23:34.122 [2024-07-15 19:45:24.772177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.123 [2024-07-15 19:45:24.772215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.123 [2024-07-15 19:45:24.772227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:34.123 [2024-07-15 19:45:24.772237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:34.123 [2024-07-15 19:45:24.772247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.123 [2024-07-15 19:45:24.772298] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:34.123 [2024-07-15 19:45:24.772330] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:34.123 [2024-07-15 19:45:24.772368] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:34.123 [2024-07-15 19:45:24.772389] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:34.123 [2024-07-15 19:45:24.772475] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:34.123 [2024-07-15 19:45:24.772490] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:34.123 [2024-07-15 19:45:24.772504] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:34.123 [2024-07-15 19:45:24.772518] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:34.123 [2024-07-15 19:45:24.772530] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:34.123 [2024-07-15 19:45:24.772541] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:34.123 [2024-07-15 19:45:24.772552] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:34.123 [2024-07-15 19:45:24.772562] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:34.123 [2024-07-15 19:45:24.772571] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:34.123 [2024-07-15 19:45:24.772581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.123 [2024-07-15 19:45:24.772595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:34.123 [2024-07-15 19:45:24.772606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:23:34.123 [2024-07-15 19:45:24.772616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.123 [2024-07-15 19:45:24.772686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.123 [2024-07-15 19:45:24.772697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:34.123 [2024-07-15 19:45:24.772708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:34.123 [2024-07-15 19:45:24.772717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.123 [2024-07-15 19:45:24.772828] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:34.123 [2024-07-15 19:45:24.772843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:34.123 [2024-07-15 19:45:24.772858] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:34.123 [2024-07-15 19:45:24.772869] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.123 [2024-07-15 19:45:24.772880] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:34.123 [2024-07-15 19:45:24.772890] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:34.123 [2024-07-15 19:45:24.772900] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:34.123 [2024-07-15 19:45:24.772911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:34.123 [2024-07-15 19:45:24.772921] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:34.123 [2024-07-15 19:45:24.772931] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:34.123 [2024-07-15 19:45:24.772942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:34.123 [2024-07-15 19:45:24.772954] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:34.123 [2024-07-15 19:45:24.772963] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:34.123 [2024-07-15 19:45:24.772973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:34.123 [2024-07-15 19:45:24.772982] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:34.123 [2024-07-15 19:45:24.772991] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.123 [2024-07-15 19:45:24.773000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:34.123 [2024-07-15 19:45:24.773009] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:34.123 [2024-07-15 19:45:24.773019] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.123 [2024-07-15 19:45:24.773028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:34.123 [2024-07-15 19:45:24.773050] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:34.123 [2024-07-15 19:45:24.773059] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:34.123 [2024-07-15 19:45:24.773068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:34.123 [2024-07-15 19:45:24.773078] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:34.123 [2024-07-15 19:45:24.773087] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:34.123 [2024-07-15 19:45:24.773096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:34.123 [2024-07-15 19:45:24.773106] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:34.123 [2024-07-15 19:45:24.773114] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:34.123 [2024-07-15 19:45:24.773123] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:34.123 [2024-07-15 19:45:24.773133] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:34.123 [2024-07-15 19:45:24.773142] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:34.123 [2024-07-15 19:45:24.773151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:34.123 [2024-07-15 19:45:24.773160] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:34.123 [2024-07-15 19:45:24.773168] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:34.123 [2024-07-15 19:45:24.773177] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:34.123 [2024-07-15 19:45:24.773186] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:34.123 [2024-07-15 19:45:24.773195] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:34.123 [2024-07-15 19:45:24.773204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:34.123 [2024-07-15 19:45:24.773213] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:34.123 [2024-07-15 19:45:24.773222] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.123 [2024-07-15 19:45:24.773231] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:34.123 [2024-07-15 19:45:24.773240] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:34.123 [2024-07-15 19:45:24.773249] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.123 [2024-07-15 19:45:24.773258] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:34.123 [2024-07-15 19:45:24.773269] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:34.123 [2024-07-15 19:45:24.773278] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:34.123 [2024-07-15 19:45:24.773288] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.123 [2024-07-15 19:45:24.773298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:34.123 [2024-07-15 19:45:24.773307] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:34.123 [2024-07-15 19:45:24.773316] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:34.123 
[2024-07-15 19:45:24.773326] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:34.123 [2024-07-15 19:45:24.773336] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:34.123 [2024-07-15 19:45:24.773347] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:34.123 [2024-07-15 19:45:24.773358] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:34.123 [2024-07-15 19:45:24.773370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:34.123 [2024-07-15 19:45:24.773382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:34.123 [2024-07-15 19:45:24.773392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:34.123 [2024-07-15 19:45:24.773403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:34.123 [2024-07-15 19:45:24.773414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:34.123 [2024-07-15 19:45:24.773425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:34.123 [2024-07-15 19:45:24.773436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:34.123 [2024-07-15 19:45:24.773446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:34.123 [2024-07-15 19:45:24.773456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:34.123 [2024-07-15 19:45:24.773466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:34.123 [2024-07-15 19:45:24.773476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:34.123 [2024-07-15 19:45:24.773487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:34.123 [2024-07-15 19:45:24.773497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:34.123 [2024-07-15 19:45:24.773507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:34.123 [2024-07-15 19:45:24.773517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:34.123 [2024-07-15 19:45:24.773527] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:34.123 [2024-07-15 19:45:24.773538] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:34.123 [2024-07-15 19:45:24.773549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:34.123 [2024-07-15 19:45:24.773559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:34.123 [2024-07-15 19:45:24.773569] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:34.123 [2024-07-15 19:45:24.773580] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:34.123 [2024-07-15 19:45:24.773593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.123 [2024-07-15 19:45:24.773608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:34.123 [2024-07-15 19:45:24.773618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.844 ms 00:23:34.123 [2024-07-15 19:45:24.773628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.124 [2024-07-15 19:45:24.826902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.124 [2024-07-15 19:45:24.826964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:34.124 [2024-07-15 19:45:24.826979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.224 ms 00:23:34.124 [2024-07-15 19:45:24.826989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.124 [2024-07-15 19:45:24.827078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.124 [2024-07-15 19:45:24.827089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:34.124 [2024-07-15 19:45:24.827101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:23:34.124 [2024-07-15 19:45:24.827110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.124 [2024-07-15 19:45:24.880173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.124 [2024-07-15 19:45:24.880214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:34.124 [2024-07-15 19:45:24.880229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.995 ms 00:23:34.124 [2024-07-15 19:45:24.880239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.124 [2024-07-15 19:45:24.880285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.124 [2024-07-15 19:45:24.880296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:34.124 [2024-07-15 19:45:24.880307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:34.124 [2024-07-15 19:45:24.880317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.124 [2024-07-15 19:45:24.880813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.124 [2024-07-15 19:45:24.880829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:34.124 [2024-07-15 19:45:24.880840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:23:34.124 [2024-07-15 19:45:24.880850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.124 [2024-07-15 19:45:24.880966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.124 [2024-07-15 19:45:24.880980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:34.124 [2024-07-15 19:45:24.880992] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:23:34.124 [2024-07-15 19:45:24.881002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.124 [2024-07-15 19:45:24.902830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.124 [2024-07-15 19:45:24.902870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:34.124 [2024-07-15 19:45:24.902884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.805 ms 00:23:34.124 [2024-07-15 19:45:24.902895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.382 [2024-07-15 19:45:24.924608] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:34.382 [2024-07-15 19:45:24.924652] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:34.382 [2024-07-15 19:45:24.924669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.382 [2024-07-15 19:45:24.924680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:34.382 [2024-07-15 19:45:24.924692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.652 ms 00:23:34.382 [2024-07-15 19:45:24.924702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.382 [2024-07-15 19:45:24.956065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.382 [2024-07-15 19:45:24.956108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:34.382 [2024-07-15 19:45:24.956123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.300 ms 00:23:34.382 [2024-07-15 19:45:24.956140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.382 [2024-07-15 19:45:24.976142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.382 [2024-07-15 19:45:24.976197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:34.382 [2024-07-15 19:45:24.976212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.955 ms 00:23:34.382 [2024-07-15 19:45:24.976222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.382 [2024-07-15 19:45:24.995411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.382 [2024-07-15 19:45:24.995449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:34.382 [2024-07-15 19:45:24.995462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.149 ms 00:23:34.382 [2024-07-15 19:45:24.995472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.382 [2024-07-15 19:45:24.996313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.382 [2024-07-15 19:45:24.996344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:34.382 [2024-07-15 19:45:24.996356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:23:34.382 [2024-07-15 19:45:24.996367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.382 [2024-07-15 19:45:25.100181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.382 [2024-07-15 19:45:25.100242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:34.382 [2024-07-15 19:45:25.100258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 103.789 ms 00:23:34.382 [2024-07-15 19:45:25.100270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.382 [2024-07-15 19:45:25.113403] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:34.382 [2024-07-15 19:45:25.117077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.382 [2024-07-15 19:45:25.117129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:34.382 [2024-07-15 19:45:25.117152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.739 ms 00:23:34.382 [2024-07-15 19:45:25.117169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.382 [2024-07-15 19:45:25.117313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.382 [2024-07-15 19:45:25.117335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:34.382 [2024-07-15 19:45:25.117353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:34.382 [2024-07-15 19:45:25.117370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.382 [2024-07-15 19:45:25.117501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.382 [2024-07-15 19:45:25.117525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:34.382 [2024-07-15 19:45:25.117544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:34.382 [2024-07-15 19:45:25.117561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.382 [2024-07-15 19:45:25.117600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.382 [2024-07-15 19:45:25.117618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:34.382 [2024-07-15 19:45:25.117637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:34.382 [2024-07-15 19:45:25.117653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.383 [2024-07-15 19:45:25.117707] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:34.383 [2024-07-15 19:45:25.117729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.383 [2024-07-15 19:45:25.117747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:34.383 [2024-07-15 19:45:25.117773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:23:34.383 [2024-07-15 19:45:25.117816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.383 [2024-07-15 19:45:25.160440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.383 [2024-07-15 19:45:25.160501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:34.383 [2024-07-15 19:45:25.160518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.577 ms 00:23:34.383 [2024-07-15 19:45:25.160529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.383 [2024-07-15 19:45:25.160621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.383 [2024-07-15 19:45:25.160646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:34.383 [2024-07-15 19:45:25.160657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:34.383 [2024-07-15 19:45:25.160668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:34.383 [2024-07-15 19:45:25.161951] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 424.681 ms, result 0 00:24:06.940  Copying: 31/1024 [MB] (31 MBps) Copying: 62/1024 [MB] (31 MBps) Copying: 94/1024 [MB] (32 MBps) Copying: 126/1024 [MB] (31 MBps) Copying: 161/1024 [MB] (35 MBps) Copying: 194/1024 [MB] (33 MBps) Copying: 227/1024 [MB] (32 MBps) Copying: 260/1024 [MB] (32 MBps) Copying: 295/1024 [MB] (34 MBps) Copying: 329/1024 [MB] (34 MBps) Copying: 364/1024 [MB] (34 MBps) Copying: 394/1024 [MB] (30 MBps) Copying: 425/1024 [MB] (31 MBps) Copying: 455/1024 [MB] (29 MBps) Copying: 486/1024 [MB] (31 MBps) Copying: 522/1024 [MB] (36 MBps) Copying: 555/1024 [MB] (33 MBps) Copying: 587/1024 [MB] (31 MBps) Copying: 618/1024 [MB] (31 MBps) Copying: 650/1024 [MB] (31 MBps) Copying: 685/1024 [MB] (34 MBps) Copying: 717/1024 [MB] (32 MBps) Copying: 747/1024 [MB] (29 MBps) Copying: 778/1024 [MB] (31 MBps) Copying: 809/1024 [MB] (31 MBps) Copying: 840/1024 [MB] (30 MBps) Copying: 873/1024 [MB] (32 MBps) Copying: 907/1024 [MB] (33 MBps) Copying: 939/1024 [MB] (32 MBps) Copying: 972/1024 [MB] (32 MBps) Copying: 1004/1024 [MB] (32 MBps) Copying: 1024/1024 [MB] (average 32 MBps)[2024-07-15 19:45:57.704055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.940 [2024-07-15 19:45:57.704157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:06.940 [2024-07-15 19:45:57.704189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:06.940 [2024-07-15 19:45:57.704213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.940 [2024-07-15 19:45:57.704258] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:06.940 [2024-07-15 19:45:57.711047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.940 [2024-07-15 19:45:57.711111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:06.940 [2024-07-15 19:45:57.711138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.754 ms 00:24:06.940 [2024-07-15 19:45:57.711162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.940 [2024-07-15 19:45:57.711520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.940 [2024-07-15 19:45:57.711551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:06.940 [2024-07-15 19:45:57.711576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:24:06.940 [2024-07-15 19:45:57.711598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.940 [2024-07-15 19:45:57.716941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.940 [2024-07-15 19:45:57.716988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:06.940 [2024-07-15 19:45:57.717013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.309 ms 00:24:06.940 [2024-07-15 19:45:57.717037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.940 [2024-07-15 19:45:57.727161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.940 [2024-07-15 19:45:57.727222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:06.940 [2024-07-15 19:45:57.727258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.084 ms 00:24:06.940 [2024-07-15 
19:45:57.727281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.198 [2024-07-15 19:45:57.796682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.198 [2024-07-15 19:45:57.796757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:07.198 [2024-07-15 19:45:57.796792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.258 ms 00:24:07.198 [2024-07-15 19:45:57.796809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.198 [2024-07-15 19:45:57.829029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.198 [2024-07-15 19:45:57.829087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:07.198 [2024-07-15 19:45:57.829109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.141 ms 00:24:07.198 [2024-07-15 19:45:57.829125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.198 [2024-07-15 19:45:57.829324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.198 [2024-07-15 19:45:57.829345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:07.198 [2024-07-15 19:45:57.829362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:24:07.198 [2024-07-15 19:45:57.829383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.198 [2024-07-15 19:45:57.889655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.198 [2024-07-15 19:45:57.889706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:07.198 [2024-07-15 19:45:57.889727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.245 ms 00:24:07.198 [2024-07-15 19:45:57.889743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.198 [2024-07-15 19:45:57.946952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.198 [2024-07-15 19:45:57.947011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:07.198 [2024-07-15 19:45:57.947025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.100 ms 00:24:07.198 [2024-07-15 19:45:57.947035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.198 [2024-07-15 19:45:57.985587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.198 [2024-07-15 19:45:57.985631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:07.198 [2024-07-15 19:45:57.985661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.509 ms 00:24:07.198 [2024-07-15 19:45:57.985672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.460 [2024-07-15 19:45:58.025001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.460 [2024-07-15 19:45:58.025043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:07.460 [2024-07-15 19:45:58.025057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.248 ms 00:24:07.460 [2024-07-15 19:45:58.025067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.460 [2024-07-15 19:45:58.025105] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:07.460 [2024-07-15 19:45:58.025123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 
19:45:58.025136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 
[2024-07-15 19:45:58.025409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 
state: free 00:24:07.460 [2024-07-15 19:45:58.025677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:07.460 [2024-07-15 19:45:58.025698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 
0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.025998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.026013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.026023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.026034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.026044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.026054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.026065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.026076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.026086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.026097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.026107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.026117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.026127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.026138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.026148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.026158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.026168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.026179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.026190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.026200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.026212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.026223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.026240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:07.461 [2024-07-15 19:45:58.026258] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:24:07.461 [2024-07-15 19:45:58.026268] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f3128da4-5021-4384-924c-f29450b8d9c2 00:24:07.461 [2024-07-15 19:45:58.026279] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:07.461 [2024-07-15 19:45:58.026289] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:07.461 [2024-07-15 19:45:58.026304] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:07.461 [2024-07-15 19:45:58.026314] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:07.461 [2024-07-15 19:45:58.026324] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:07.461 [2024-07-15 19:45:58.026334] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:07.461 [2024-07-15 19:45:58.026344] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:07.461 [2024-07-15 19:45:58.026353] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:07.461 [2024-07-15 19:45:58.026362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:07.461 [2024-07-15 19:45:58.026372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.461 [2024-07-15 19:45:58.026382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:07.461 [2024-07-15 19:45:58.026393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.267 ms 00:24:07.461 [2024-07-15 19:45:58.026409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.461 [2024-07-15 19:45:58.046887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.461 [2024-07-15 19:45:58.046941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:07.461 [2024-07-15 19:45:58.046968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.436 ms 00:24:07.461 [2024-07-15 19:45:58.046979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.461 [2024-07-15 19:45:58.047465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.461 [2024-07-15 19:45:58.047476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:07.461 [2024-07-15 19:45:58.047487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:24:07.461 [2024-07-15 19:45:58.047496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.461 [2024-07-15 19:45:58.094369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.461 [2024-07-15 19:45:58.094422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:07.461 [2024-07-15 19:45:58.094436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.461 [2024-07-15 19:45:58.094448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.461 [2024-07-15 19:45:58.094510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.461 [2024-07-15 19:45:58.094521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:07.461 [2024-07-15 19:45:58.094531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.461 [2024-07-15 19:45:58.094541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.461 [2024-07-15 19:45:58.094614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.461 
[2024-07-15 19:45:58.094627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:07.461 [2024-07-15 19:45:58.094637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.461 [2024-07-15 19:45:58.094647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.461 [2024-07-15 19:45:58.094664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.461 [2024-07-15 19:45:58.094674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:07.461 [2024-07-15 19:45:58.094684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.461 [2024-07-15 19:45:58.094694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.461 [2024-07-15 19:45:58.221612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.461 [2024-07-15 19:45:58.221673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:07.461 [2024-07-15 19:45:58.221688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.461 [2024-07-15 19:45:58.221700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.720 [2024-07-15 19:45:58.322979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.720 [2024-07-15 19:45:58.323030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:07.720 [2024-07-15 19:45:58.323045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.720 [2024-07-15 19:45:58.323055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.720 [2024-07-15 19:45:58.323120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.720 [2024-07-15 19:45:58.323131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:07.720 [2024-07-15 19:45:58.323147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.720 [2024-07-15 19:45:58.323157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.720 [2024-07-15 19:45:58.323193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.720 [2024-07-15 19:45:58.323204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:07.720 [2024-07-15 19:45:58.323214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.720 [2024-07-15 19:45:58.323223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.720 [2024-07-15 19:45:58.323345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.720 [2024-07-15 19:45:58.323358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:07.720 [2024-07-15 19:45:58.323373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.720 [2024-07-15 19:45:58.323383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.720 [2024-07-15 19:45:58.323417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.720 [2024-07-15 19:45:58.323428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:07.720 [2024-07-15 19:45:58.323439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.720 [2024-07-15 19:45:58.323449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.720 [2024-07-15 19:45:58.323484] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.720 [2024-07-15 19:45:58.323495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:07.720 [2024-07-15 19:45:58.323506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.720 [2024-07-15 19:45:58.323519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.720 [2024-07-15 19:45:58.323561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:07.720 [2024-07-15 19:45:58.323573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:07.720 [2024-07-15 19:45:58.323582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:07.720 [2024-07-15 19:45:58.323592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.720 [2024-07-15 19:45:58.323711] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 619.646 ms, result 0 00:24:09.095 00:24:09.095 00:24:09.095 19:45:59 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:10.994 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:10.994 19:46:01 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:24:10.994 [2024-07-15 19:46:01.531541] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:24:10.994 [2024-07-15 19:46:01.531681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83086 ] 00:24:10.994 [2024-07-15 19:46:01.699547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.252 [2024-07-15 19:46:01.989036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.820 [2024-07-15 19:46:02.388317] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:11.820 [2024-07-15 19:46:02.388391] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:11.820 [2024-07-15 19:46:02.549746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.820 [2024-07-15 19:46:02.549834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:11.820 [2024-07-15 19:46:02.549851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:11.820 [2024-07-15 19:46:02.549877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.820 [2024-07-15 19:46:02.549933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.820 [2024-07-15 19:46:02.549947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:11.820 [2024-07-15 19:46:02.549957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:11.820 [2024-07-15 19:46:02.549970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.820 [2024-07-15 19:46:02.549991] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:11.820 [2024-07-15 19:46:02.551154] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using bdev as NV Cache device 00:24:11.820 [2024-07-15 19:46:02.551181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.820 [2024-07-15 19:46:02.551195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:11.820 [2024-07-15 19:46:02.551206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.194 ms 00:24:11.820 [2024-07-15 19:46:02.551215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.820 [2024-07-15 19:46:02.552618] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:11.820 [2024-07-15 19:46:02.572376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.820 [2024-07-15 19:46:02.572413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:11.820 [2024-07-15 19:46:02.572427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.758 ms 00:24:11.820 [2024-07-15 19:46:02.572438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.820 [2024-07-15 19:46:02.572519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.820 [2024-07-15 19:46:02.572532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:11.820 [2024-07-15 19:46:02.572620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:11.820 [2024-07-15 19:46:02.572630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.820 [2024-07-15 19:46:02.579255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.820 [2024-07-15 19:46:02.579284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:11.820 [2024-07-15 19:46:02.579297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.546 ms 00:24:11.820 [2024-07-15 19:46:02.579307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.820 [2024-07-15 19:46:02.579385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.820 [2024-07-15 19:46:02.579401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:11.820 [2024-07-15 19:46:02.579413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:11.820 [2024-07-15 19:46:02.579423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.820 [2024-07-15 19:46:02.579466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.820 [2024-07-15 19:46:02.579478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:11.820 [2024-07-15 19:46:02.579488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:11.820 [2024-07-15 19:46:02.579498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.820 [2024-07-15 19:46:02.579523] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:11.820 [2024-07-15 19:46:02.585058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.820 [2024-07-15 19:46:02.585091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:11.820 [2024-07-15 19:46:02.585103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.541 ms 00:24:11.820 [2024-07-15 19:46:02.585113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.820 [2024-07-15 19:46:02.585149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:11.820 [2024-07-15 19:46:02.585161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:11.820 [2024-07-15 19:46:02.585171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:11.820 [2024-07-15 19:46:02.585181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.820 [2024-07-15 19:46:02.585231] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:11.820 [2024-07-15 19:46:02.585256] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:11.820 [2024-07-15 19:46:02.585291] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:11.820 [2024-07-15 19:46:02.585311] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:11.820 [2024-07-15 19:46:02.585396] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:11.820 [2024-07-15 19:46:02.585409] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:11.820 [2024-07-15 19:46:02.585422] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:11.820 [2024-07-15 19:46:02.585435] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:11.820 [2024-07-15 19:46:02.585446] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:11.820 [2024-07-15 19:46:02.585457] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:11.820 [2024-07-15 19:46:02.585467] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:11.820 [2024-07-15 19:46:02.585477] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:11.820 [2024-07-15 19:46:02.585487] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:11.820 [2024-07-15 19:46:02.585497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.820 [2024-07-15 19:46:02.585510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:11.820 [2024-07-15 19:46:02.585520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:24:11.820 [2024-07-15 19:46:02.585530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.820 [2024-07-15 19:46:02.585599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.820 [2024-07-15 19:46:02.585609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:11.820 [2024-07-15 19:46:02.585619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:11.820 [2024-07-15 19:46:02.585628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.820 [2024-07-15 19:46:02.585712] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:11.820 [2024-07-15 19:46:02.585724] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:11.820 [2024-07-15 19:46:02.585738] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:11.820 [2024-07-15 19:46:02.585748] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.820 [2024-07-15 
19:46:02.585759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:11.820 [2024-07-15 19:46:02.585768] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:11.820 [2024-07-15 19:46:02.585794] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:11.820 [2024-07-15 19:46:02.585805] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:11.820 [2024-07-15 19:46:02.585816] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:11.820 [2024-07-15 19:46:02.585825] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:11.820 [2024-07-15 19:46:02.585835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:11.820 [2024-07-15 19:46:02.585844] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:11.821 [2024-07-15 19:46:02.585854] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:11.821 [2024-07-15 19:46:02.585863] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:11.821 [2024-07-15 19:46:02.585873] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:11.821 [2024-07-15 19:46:02.585899] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.821 [2024-07-15 19:46:02.585908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:11.821 [2024-07-15 19:46:02.585918] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:11.821 [2024-07-15 19:46:02.585928] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.821 [2024-07-15 19:46:02.585937] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:11.821 [2024-07-15 19:46:02.585957] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:11.821 [2024-07-15 19:46:02.585967] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.821 [2024-07-15 19:46:02.585976] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:11.821 [2024-07-15 19:46:02.585986] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:11.821 [2024-07-15 19:46:02.585995] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.821 [2024-07-15 19:46:02.586004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:11.821 [2024-07-15 19:46:02.586014] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:11.821 [2024-07-15 19:46:02.586024] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.821 [2024-07-15 19:46:02.586033] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:11.821 [2024-07-15 19:46:02.586049] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:11.821 [2024-07-15 19:46:02.586058] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.821 [2024-07-15 19:46:02.586067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:11.821 [2024-07-15 19:46:02.586077] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:11.821 [2024-07-15 19:46:02.586086] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:11.821 [2024-07-15 19:46:02.586095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:11.821 [2024-07-15 19:46:02.586104] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 
MiB 00:24:11.821 [2024-07-15 19:46:02.586113] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:11.821 [2024-07-15 19:46:02.586122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:11.821 [2024-07-15 19:46:02.586131] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:11.821 [2024-07-15 19:46:02.586140] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.821 [2024-07-15 19:46:02.586150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:11.821 [2024-07-15 19:46:02.586159] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:11.821 [2024-07-15 19:46:02.586168] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.821 [2024-07-15 19:46:02.586177] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:11.821 [2024-07-15 19:46:02.586187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:11.821 [2024-07-15 19:46:02.586196] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:11.821 [2024-07-15 19:46:02.586205] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.821 [2024-07-15 19:46:02.586215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:11.821 [2024-07-15 19:46:02.586224] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:11.821 [2024-07-15 19:46:02.586233] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:11.821 [2024-07-15 19:46:02.586242] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:11.821 [2024-07-15 19:46:02.586251] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:11.821 [2024-07-15 19:46:02.586261] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:11.821 [2024-07-15 19:46:02.586271] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:11.821 [2024-07-15 19:46:02.586283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:11.821 [2024-07-15 19:46:02.586294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:11.821 [2024-07-15 19:46:02.586304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:11.821 [2024-07-15 19:46:02.586315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:11.821 [2024-07-15 19:46:02.586324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:11.821 [2024-07-15 19:46:02.586335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:11.821 [2024-07-15 19:46:02.586345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:11.821 [2024-07-15 19:46:02.586356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:11.821 [2024-07-15 19:46:02.586366] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:11.821 [2024-07-15 19:46:02.586376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:11.821 [2024-07-15 19:46:02.586386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:11.821 [2024-07-15 19:46:02.586396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:11.821 [2024-07-15 19:46:02.586414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:11.821 [2024-07-15 19:46:02.586425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:11.821 [2024-07-15 19:46:02.586436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:11.821 [2024-07-15 19:46:02.586446] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:11.821 [2024-07-15 19:46:02.586457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:11.821 [2024-07-15 19:46:02.586468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:11.821 [2024-07-15 19:46:02.586481] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:11.821 [2024-07-15 19:46:02.586492] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:11.821 [2024-07-15 19:46:02.586503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:11.821 [2024-07-15 19:46:02.586514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.821 [2024-07-15 19:46:02.586527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:11.821 [2024-07-15 19:46:02.586537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.853 ms 00:24:11.821 [2024-07-15 19:46:02.586547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.080 [2024-07-15 19:46:02.642933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.080 [2024-07-15 19:46:02.642993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:12.080 [2024-07-15 19:46:02.643010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.338 ms 00:24:12.080 [2024-07-15 19:46:02.643023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.080 [2024-07-15 19:46:02.643125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.080 [2024-07-15 19:46:02.643139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:12.080 [2024-07-15 19:46:02.643151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:12.080 [2024-07-15 19:46:02.643162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
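The "SB metadata layout" entries above describe each region as blk_offs/blk_sz in blocks, while the layout dump before them prints the same regions in MiB. A minimal Python sketch for converting one form to the other is below. It assumes a 4 KiB FTL block, which is consistent with the figures in this log (blk_sz:0x20 corresponds to the 0.12 MiB sb region, blk_sz:0x5000 to the 80.00 MiB l2p region, blk_sz:0x1900000 to the 102400.00 MiB data_btm region); the real block size comes from the device geometry, and the console.log path is hypothetical.

import re

# Assumed FTL block size; 4 KiB matches the MiB figures printed in this log's layout dump.
FTL_BLOCK_SIZE = 4096

# Match entries of the form:
#   "Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000"
REGION_RE = re.compile(
    r"Region type:(0x[0-9a-f]+) ver:(\d+) blk_offs:(0x[0-9a-f]+) blk_sz:(0x[0-9a-f]+)"
)

def dump_regions_in_mib(log_text: str) -> None:
    """Print each superblock layout region with its offset and size converted to MiB."""
    for rtype, ver, offs, size in REGION_RE.findall(log_text):
        offs_mib = int(offs, 16) * FTL_BLOCK_SIZE / (1024 * 1024)
        size_mib = int(size, 16) * FTL_BLOCK_SIZE / (1024 * 1024)
        print(f"type {rtype:>12} ver {ver}  offset {offs_mib:12.2f} MiB  size {size_mib:12.2f} MiB")

if __name__ == "__main__":
    with open("console.log") as fh:  # hypothetical path to this build's console output
        dump_regions_in_mib(fh.read())

Cross-checking the two dumps this way is a quick sanity test that the blk_offs/blk_sz values and the human-readable MiB layout describe the same regions.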
00:24:12.080 [2024-07-15 19:46:02.696435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.080 [2024-07-15 19:46:02.696489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:12.080 [2024-07-15 19:46:02.696505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.193 ms 00:24:12.080 [2024-07-15 19:46:02.696515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.080 [2024-07-15 19:46:02.696571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.080 [2024-07-15 19:46:02.696583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:12.080 [2024-07-15 19:46:02.696594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:12.080 [2024-07-15 19:46:02.696603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.080 [2024-07-15 19:46:02.697097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.080 [2024-07-15 19:46:02.697113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:12.080 [2024-07-15 19:46:02.697124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:24:12.080 [2024-07-15 19:46:02.697134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.080 [2024-07-15 19:46:02.697251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.080 [2024-07-15 19:46:02.697265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:12.080 [2024-07-15 19:46:02.697275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:24:12.080 [2024-07-15 19:46:02.697285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.080 [2024-07-15 19:46:02.718267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.080 [2024-07-15 19:46:02.718311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:12.080 [2024-07-15 19:46:02.718325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.961 ms 00:24:12.080 [2024-07-15 19:46:02.718336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.080 [2024-07-15 19:46:02.740931] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:12.080 [2024-07-15 19:46:02.740975] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:12.080 [2024-07-15 19:46:02.740990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.080 [2024-07-15 19:46:02.741002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:12.080 [2024-07-15 19:46:02.741014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.494 ms 00:24:12.080 [2024-07-15 19:46:02.741024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.080 [2024-07-15 19:46:02.772987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.080 [2024-07-15 19:46:02.773032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:12.080 [2024-07-15 19:46:02.773047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.910 ms 00:24:12.080 [2024-07-15 19:46:02.773063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.080 [2024-07-15 19:46:02.792235] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:24:12.080 [2024-07-15 19:46:02.792274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:12.080 [2024-07-15 19:46:02.792287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.126 ms 00:24:12.080 [2024-07-15 19:46:02.792297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.080 [2024-07-15 19:46:02.811517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.080 [2024-07-15 19:46:02.811554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:12.080 [2024-07-15 19:46:02.811580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.181 ms 00:24:12.080 [2024-07-15 19:46:02.811605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.080 [2024-07-15 19:46:02.812494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.080 [2024-07-15 19:46:02.812525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:12.080 [2024-07-15 19:46:02.812538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:24:12.080 [2024-07-15 19:46:02.812548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.339 [2024-07-15 19:46:02.906168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.339 [2024-07-15 19:46:02.906235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:12.339 [2024-07-15 19:46:02.906253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.595 ms 00:24:12.339 [2024-07-15 19:46:02.906264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.339 [2024-07-15 19:46:02.919271] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:12.339 [2024-07-15 19:46:02.922468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.339 [2024-07-15 19:46:02.922500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:12.339 [2024-07-15 19:46:02.922515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.130 ms 00:24:12.339 [2024-07-15 19:46:02.922525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.339 [2024-07-15 19:46:02.922627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.339 [2024-07-15 19:46:02.922640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:12.339 [2024-07-15 19:46:02.922652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:12.339 [2024-07-15 19:46:02.922662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.339 [2024-07-15 19:46:02.922733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.339 [2024-07-15 19:46:02.922748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:12.339 [2024-07-15 19:46:02.922759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:12.339 [2024-07-15 19:46:02.922769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.339 [2024-07-15 19:46:02.922805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.339 [2024-07-15 19:46:02.922816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:12.339 [2024-07-15 19:46:02.922826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.004 ms 00:24:12.339 [2024-07-15 19:46:02.922837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.339 [2024-07-15 19:46:02.922871] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:12.339 [2024-07-15 19:46:02.922883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.339 [2024-07-15 19:46:02.922892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:12.339 [2024-07-15 19:46:02.922906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:12.339 [2024-07-15 19:46:02.922916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.339 [2024-07-15 19:46:02.965299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.339 [2024-07-15 19:46:02.965383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:12.339 [2024-07-15 19:46:02.965401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.362 ms 00:24:12.339 [2024-07-15 19:46:02.965411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.339 [2024-07-15 19:46:02.965514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.339 [2024-07-15 19:46:02.965538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:12.339 [2024-07-15 19:46:02.965550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:12.339 [2024-07-15 19:46:02.965560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.339 [2024-07-15 19:46:02.966985] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 416.684 ms, result 0 00:24:46.475  Copying: 31/1024 [MB] (31 MBps) Copying: 64/1024 [MB] (32 MBps) Copying: 96/1024 [MB] (31 MBps) Copying: 127/1024 [MB] (31 MBps) Copying: 159/1024 [MB] (31 MBps) Copying: 191/1024 [MB] (31 MBps) Copying: 222/1024 [MB] (31 MBps) Copying: 253/1024 [MB] (31 MBps) Copying: 283/1024 [MB] (29 MBps) Copying: 314/1024 [MB] (31 MBps) Copying: 345/1024 [MB] (31 MBps) Copying: 375/1024 [MB] (29 MBps) Copying: 406/1024 [MB] (31 MBps) Copying: 438/1024 [MB] (31 MBps) Copying: 468/1024 [MB] (30 MBps) Copying: 499/1024 [MB] (30 MBps) Copying: 529/1024 [MB] (30 MBps) Copying: 560/1024 [MB] (31 MBps) Copying: 592/1024 [MB] (31 MBps) Copying: 622/1024 [MB] (30 MBps) Copying: 653/1024 [MB] (31 MBps) Copying: 684/1024 [MB] (30 MBps) Copying: 715/1024 [MB] (30 MBps) Copying: 746/1024 [MB] (31 MBps) Copying: 775/1024 [MB] (29 MBps) Copying: 806/1024 [MB] (30 MBps) Copying: 836/1024 [MB] (30 MBps) Copying: 866/1024 [MB] (30 MBps) Copying: 896/1024 [MB] (30 MBps) Copying: 924/1024 [MB] (28 MBps) Copying: 955/1024 [MB] (30 MBps) Copying: 986/1024 [MB] (30 MBps) Copying: 1017/1024 [MB] (30 MBps) Copying: 1048420/1048576 [kB] (6832 kBps) Copying: 1024/1024 [MB] (average 29 MBps)[2024-07-15 19:46:37.138800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.475 [2024-07-15 19:46:37.138883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:46.475 [2024-07-15 19:46:37.138901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:46.475 [2024-07-15 19:46:37.138914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.475 [2024-07-15 19:46:37.142375] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:24:46.475 [2024-07-15 19:46:37.147357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.475 [2024-07-15 19:46:37.147408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:46.475 [2024-07-15 19:46:37.147439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.919 ms 00:24:46.475 [2024-07-15 19:46:37.147450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.475 [2024-07-15 19:46:37.159212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.475 [2024-07-15 19:46:37.159254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:46.475 [2024-07-15 19:46:37.159270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.047 ms 00:24:46.475 [2024-07-15 19:46:37.159281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.475 [2024-07-15 19:46:37.180512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.475 [2024-07-15 19:46:37.180557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:46.475 [2024-07-15 19:46:37.180580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.212 ms 00:24:46.475 [2024-07-15 19:46:37.180603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.475 [2024-07-15 19:46:37.186400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.475 [2024-07-15 19:46:37.186447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:46.475 [2024-07-15 19:46:37.186460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.759 ms 00:24:46.475 [2024-07-15 19:46:37.186471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.475 [2024-07-15 19:46:37.228678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.475 [2024-07-15 19:46:37.228720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:46.475 [2024-07-15 19:46:37.228735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.147 ms 00:24:46.475 [2024-07-15 19:46:37.228746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.475 [2024-07-15 19:46:37.252733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.475 [2024-07-15 19:46:37.252801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:46.475 [2024-07-15 19:46:37.252827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.928 ms 00:24:46.475 [2024-07-15 19:46:37.252843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.734 [2024-07-15 19:46:37.348455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.734 [2024-07-15 19:46:37.348525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:46.734 [2024-07-15 19:46:37.348542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.564 ms 00:24:46.734 [2024-07-15 19:46:37.348554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.734 [2024-07-15 19:46:37.394084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.734 [2024-07-15 19:46:37.394139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:46.734 [2024-07-15 19:46:37.394156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.507 ms 00:24:46.734 
[2024-07-15 19:46:37.394167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.734 [2024-07-15 19:46:37.439661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.734 [2024-07-15 19:46:37.439715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:46.734 [2024-07-15 19:46:37.439731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.447 ms 00:24:46.734 [2024-07-15 19:46:37.439743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.734 [2024-07-15 19:46:37.485304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.734 [2024-07-15 19:46:37.485387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:46.734 [2024-07-15 19:46:37.485418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.488 ms 00:24:46.734 [2024-07-15 19:46:37.485429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.994 [2024-07-15 19:46:37.530763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.994 [2024-07-15 19:46:37.530821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:46.994 [2024-07-15 19:46:37.530837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.248 ms 00:24:46.994 [2024-07-15 19:46:37.530849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.994 [2024-07-15 19:46:37.530896] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:46.994 [2024-07-15 19:46:37.530925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 116224 / 261120 wr_cnt: 1 state: open 00:24:46.994 [2024-07-15 19:46:37.530945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.530967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.530988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.531009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.531030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.531052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.531074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.531097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.531119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.531141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.531165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.531188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.531211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 
state: free 00:24:46.994 [2024-07-15 19:46:37.531235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.531258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.531281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.531305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.531333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.531356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.531379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.531402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.531423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.531444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.531468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.531490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.531522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:46.994 [2024-07-15 19:46:37.531552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.531576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.531609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.531634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.531656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.531678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.531700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.531722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.531743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.531763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.531800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.531822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 
0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.531845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.531867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.531890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.531912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.531935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.531960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.531983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532766] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:46.995 [2024-07-15 19:46:37.532960] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:46.995 [2024-07-15 19:46:37.532972] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f3128da4-5021-4384-924c-f29450b8d9c2 00:24:46.995 [2024-07-15 19:46:37.532984] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 116224 00:24:46.995 [2024-07-15 19:46:37.532996] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 117184 00:24:46.995 [2024-07-15 19:46:37.533007] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 116224 00:24:46.995 [2024-07-15 19:46:37.533019] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0083 00:24:46.995 [2024-07-15 19:46:37.533030] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:46.995 [2024-07-15 19:46:37.533047] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:46.995 [2024-07-15 19:46:37.533059] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:46.995 [2024-07-15 19:46:37.533069] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:46.995 [2024-07-15 19:46:37.533079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:46.995 [2024-07-15 19:46:37.533092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.995 [2024-07-15 19:46:37.533107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:46.995 [2024-07-15 19:46:37.533119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.197 ms 00:24:46.995 [2024-07-15 19:46:37.533131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.995 [2024-07-15 19:46:37.556256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.995 [2024-07-15 19:46:37.556295] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:46.995 [2024-07-15 19:46:37.556322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.078 ms 00:24:46.995 [2024-07-15 19:46:37.556333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.995 [2024-07-15 19:46:37.556946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.995 [2024-07-15 19:46:37.556961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:46.995 [2024-07-15 19:46:37.556972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:24:46.995 [2024-07-15 19:46:37.556982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.995 [2024-07-15 19:46:37.606290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.995 [2024-07-15 19:46:37.606331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:46.995 [2024-07-15 19:46:37.606346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.995 [2024-07-15 19:46:37.606357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.995 [2024-07-15 19:46:37.606430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.995 [2024-07-15 19:46:37.606442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:46.995 [2024-07-15 19:46:37.606453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.996 [2024-07-15 19:46:37.606464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.996 [2024-07-15 19:46:37.606532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.996 [2024-07-15 19:46:37.606547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:46.996 [2024-07-15 19:46:37.606558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.996 [2024-07-15 19:46:37.606569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.996 [2024-07-15 19:46:37.606592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.996 [2024-07-15 19:46:37.606603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:46.996 [2024-07-15 19:46:37.606613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.996 [2024-07-15 19:46:37.606624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.996 [2024-07-15 19:46:37.742915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.996 [2024-07-15 19:46:37.742978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:46.996 [2024-07-15 19:46:37.742994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.996 [2024-07-15 19:46:37.743006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.255 [2024-07-15 19:46:37.862441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.255 [2024-07-15 19:46:37.862491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:47.255 [2024-07-15 19:46:37.862508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.255 [2024-07-15 19:46:37.862522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.255 [2024-07-15 19:46:37.862614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:24:47.255 [2024-07-15 19:46:37.862630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:47.255 [2024-07-15 19:46:37.862644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.255 [2024-07-15 19:46:37.862659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.255 [2024-07-15 19:46:37.862722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.255 [2024-07-15 19:46:37.862745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:47.255 [2024-07-15 19:46:37.862760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.255 [2024-07-15 19:46:37.862775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.255 [2024-07-15 19:46:37.862934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.255 [2024-07-15 19:46:37.862955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:47.255 [2024-07-15 19:46:37.862968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.255 [2024-07-15 19:46:37.862980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.255 [2024-07-15 19:46:37.863022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.255 [2024-07-15 19:46:37.863036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:47.255 [2024-07-15 19:46:37.863053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.255 [2024-07-15 19:46:37.863065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.255 [2024-07-15 19:46:37.863103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.255 [2024-07-15 19:46:37.863115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:47.255 [2024-07-15 19:46:37.863127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.255 [2024-07-15 19:46:37.863138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.255 [2024-07-15 19:46:37.863183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.255 [2024-07-15 19:46:37.863199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:47.255 [2024-07-15 19:46:37.863211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.255 [2024-07-15 19:46:37.863222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.255 [2024-07-15 19:46:37.863355] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 726.628 ms, result 0 00:24:49.157 00:24:49.157 00:24:49.157 19:46:39 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:49.157 [2024-07-15 19:46:39.588951] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
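The statistics dumped a few lines above are internally consistent: with 117184 total writes against 116224 user writes, the reported WAF of 1.0083 is just their ratio, leaving 960 blocks of metadata writes for the run. A minimal C check of that arithmetic, assuming the WAF figure printed by ftl_dev_dump_stats is simply total writes divided by user writes (both counters are copied from the log; nothing below touches the device):

#include <stdio.h>

int main(void)
{
    /* Counters copied from the ftl_debug.c dump above. */
    const unsigned long long total_writes = 117184ULL;
    const unsigned long long user_writes  = 116224ULL;

    /* Assumed definition: write amplification = media writes / user writes. */
    printf("WAF = %.4f\n", (double)total_writes / (double)user_writes);    /* 1.0083 */
    printf("metadata writes = %llu blocks\n", total_writes - user_writes); /* 960 */
    return 0;
}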
00:24:49.157 [2024-07-15 19:46:39.589077] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83471 ] 00:24:49.157 [2024-07-15 19:46:39.753048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.415 [2024-07-15 19:46:39.991599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.674 [2024-07-15 19:46:40.406958] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:49.674 [2024-07-15 19:46:40.407036] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:49.933 [2024-07-15 19:46:40.574465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.933 [2024-07-15 19:46:40.574524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:49.933 [2024-07-15 19:46:40.574544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:49.934 [2024-07-15 19:46:40.574558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.934 [2024-07-15 19:46:40.574624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.934 [2024-07-15 19:46:40.574642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:49.934 [2024-07-15 19:46:40.574656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:49.934 [2024-07-15 19:46:40.574673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.934 [2024-07-15 19:46:40.574702] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:49.934 [2024-07-15 19:46:40.575894] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:49.934 [2024-07-15 19:46:40.575935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.934 [2024-07-15 19:46:40.575951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:49.934 [2024-07-15 19:46:40.575963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:24:49.934 [2024-07-15 19:46:40.575974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.934 [2024-07-15 19:46:40.577424] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:49.934 [2024-07-15 19:46:40.598676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.934 [2024-07-15 19:46:40.598722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:49.934 [2024-07-15 19:46:40.598739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.250 ms 00:24:49.934 [2024-07-15 19:46:40.598752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.934 [2024-07-15 19:46:40.598839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.934 [2024-07-15 19:46:40.598874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:49.934 [2024-07-15 19:46:40.598891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:49.934 [2024-07-15 19:46:40.598904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.934 [2024-07-15 19:46:40.605986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:49.934 [2024-07-15 19:46:40.606018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:49.934 [2024-07-15 19:46:40.606030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.000 ms 00:24:49.934 [2024-07-15 19:46:40.606042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.934 [2024-07-15 19:46:40.606145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.934 [2024-07-15 19:46:40.606165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:49.934 [2024-07-15 19:46:40.606178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:24:49.934 [2024-07-15 19:46:40.606190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.934 [2024-07-15 19:46:40.606248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.934 [2024-07-15 19:46:40.606261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:49.934 [2024-07-15 19:46:40.606273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:49.934 [2024-07-15 19:46:40.606283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.934 [2024-07-15 19:46:40.606310] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:49.934 [2024-07-15 19:46:40.612858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.934 [2024-07-15 19:46:40.612889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:49.934 [2024-07-15 19:46:40.612901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.554 ms 00:24:49.934 [2024-07-15 19:46:40.612910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.934 [2024-07-15 19:46:40.612948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.934 [2024-07-15 19:46:40.612959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:49.934 [2024-07-15 19:46:40.612969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:49.934 [2024-07-15 19:46:40.612996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.934 [2024-07-15 19:46:40.613049] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:49.934 [2024-07-15 19:46:40.613093] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:49.934 [2024-07-15 19:46:40.613132] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:49.934 [2024-07-15 19:46:40.613155] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:49.934 [2024-07-15 19:46:40.613252] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:49.934 [2024-07-15 19:46:40.613267] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:49.934 [2024-07-15 19:46:40.613282] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:49.934 [2024-07-15 19:46:40.613297] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:49.934 [2024-07-15 19:46:40.613312] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:49.934 [2024-07-15 19:46:40.613325] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:49.934 [2024-07-15 19:46:40.613336] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:49.934 [2024-07-15 19:46:40.613348] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:49.934 [2024-07-15 19:46:40.613359] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:49.934 [2024-07-15 19:46:40.613371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.934 [2024-07-15 19:46:40.613386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:49.934 [2024-07-15 19:46:40.613398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:24:49.934 [2024-07-15 19:46:40.613409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.934 [2024-07-15 19:46:40.613492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.934 [2024-07-15 19:46:40.613515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:49.934 [2024-07-15 19:46:40.613527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:24:49.934 [2024-07-15 19:46:40.613538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.934 [2024-07-15 19:46:40.613636] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:49.934 [2024-07-15 19:46:40.613649] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:49.934 [2024-07-15 19:46:40.613666] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:49.934 [2024-07-15 19:46:40.613678] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.934 [2024-07-15 19:46:40.613690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:49.934 [2024-07-15 19:46:40.613701] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:49.934 [2024-07-15 19:46:40.613712] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:49.934 [2024-07-15 19:46:40.613723] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:49.934 [2024-07-15 19:46:40.613734] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:49.934 [2024-07-15 19:46:40.613745] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:49.934 [2024-07-15 19:46:40.613756] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:49.934 [2024-07-15 19:46:40.613768] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:49.934 [2024-07-15 19:46:40.613779] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:49.934 [2024-07-15 19:46:40.613790] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:49.934 [2024-07-15 19:46:40.613802] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:49.934 [2024-07-15 19:46:40.613833] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.934 [2024-07-15 19:46:40.613845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:49.934 [2024-07-15 19:46:40.613856] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:49.934 [2024-07-15 19:46:40.613867] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.934 [2024-07-15 19:46:40.613877] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:49.934 [2024-07-15 19:46:40.613899] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:49.934 [2024-07-15 19:46:40.613910] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.934 [2024-07-15 19:46:40.613921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:49.934 [2024-07-15 19:46:40.613932] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:49.934 [2024-07-15 19:46:40.613948] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.934 [2024-07-15 19:46:40.613959] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:49.934 [2024-07-15 19:46:40.613970] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:49.934 [2024-07-15 19:46:40.613980] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.934 [2024-07-15 19:46:40.613991] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:49.934 [2024-07-15 19:46:40.614001] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:49.934 [2024-07-15 19:46:40.614012] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.934 [2024-07-15 19:46:40.614023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:49.934 [2024-07-15 19:46:40.614034] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:49.934 [2024-07-15 19:46:40.614044] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:49.934 [2024-07-15 19:46:40.614055] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:49.934 [2024-07-15 19:46:40.614065] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:49.934 [2024-07-15 19:46:40.614076] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:49.934 [2024-07-15 19:46:40.614086] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:49.934 [2024-07-15 19:46:40.614097] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:49.934 [2024-07-15 19:46:40.614107] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.934 [2024-07-15 19:46:40.614118] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:49.934 [2024-07-15 19:46:40.614129] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:49.934 [2024-07-15 19:46:40.614139] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.934 [2024-07-15 19:46:40.614150] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:49.934 [2024-07-15 19:46:40.614161] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:49.934 [2024-07-15 19:46:40.614183] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:49.934 [2024-07-15 19:46:40.614194] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.934 [2024-07-15 19:46:40.614205] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:49.934 [2024-07-15 19:46:40.614231] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:49.934 [2024-07-15 19:46:40.614242] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:49.934 
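The layout figures in the dump above also tie together: ftl_layout_setup reports 20971520 L2P entries with an address size of 4 bytes, and dump_region shows the l2p region occupying exactly 80.00 MiB. A small sketch of that cross-check, using only the two values quoted from the log (the 4 KiB logical-block figure in the last line is an assumption about the FTL block size, not something printed here):

#include <stdio.h>

int main(void)
{
    /* Values copied from the ftl_layout_setup output above. */
    const unsigned long long l2p_entries = 20971520ULL;
    const unsigned long long entry_bytes = 4ULL;

    /* 20971520 * 4 B = 80 MiB, matching "Region l2p ... blocks: 80.00 MiB". */
    printf("l2p region size = %.2f MiB\n",
           (double)(l2p_entries * entry_bytes) / (1024.0 * 1024.0));

    /* Assuming 4 KiB logical blocks, the same entry count maps 80 GiB of user space. */
    printf("addressable space = %llu GiB\n", l2p_entries * 4096ULL >> 30);
    return 0;
}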
[2024-07-15 19:46:40.614253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:49.934 [2024-07-15 19:46:40.614263] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:49.935 [2024-07-15 19:46:40.614274] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:49.935 [2024-07-15 19:46:40.614286] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:49.935 [2024-07-15 19:46:40.614300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:49.935 [2024-07-15 19:46:40.614313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:49.935 [2024-07-15 19:46:40.614325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:49.935 [2024-07-15 19:46:40.614337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:49.935 [2024-07-15 19:46:40.614349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:49.935 [2024-07-15 19:46:40.614361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:49.935 [2024-07-15 19:46:40.614373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:49.935 [2024-07-15 19:46:40.614385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:49.935 [2024-07-15 19:46:40.614397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:49.935 [2024-07-15 19:46:40.614410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:49.935 [2024-07-15 19:46:40.614433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:49.935 [2024-07-15 19:46:40.614445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:49.935 [2024-07-15 19:46:40.614457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:49.935 [2024-07-15 19:46:40.614469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:49.935 [2024-07-15 19:46:40.614482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:49.935 [2024-07-15 19:46:40.614493] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:49.935 [2024-07-15 19:46:40.614506] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:49.935 [2024-07-15 19:46:40.614519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:49.935 [2024-07-15 19:46:40.614531] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:49.935 [2024-07-15 19:46:40.614543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:49.935 [2024-07-15 19:46:40.614555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:49.935 [2024-07-15 19:46:40.614568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.935 [2024-07-15 19:46:40.614584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:49.935 [2024-07-15 19:46:40.614595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 00:24:49.935 [2024-07-15 19:46:40.614607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.935 [2024-07-15 19:46:40.672313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.935 [2024-07-15 19:46:40.672393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:49.935 [2024-07-15 19:46:40.672410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.648 ms 00:24:49.935 [2024-07-15 19:46:40.672421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.935 [2024-07-15 19:46:40.672527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.935 [2024-07-15 19:46:40.672538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:49.935 [2024-07-15 19:46:40.672550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:24:49.935 [2024-07-15 19:46:40.672560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.197 [2024-07-15 19:46:40.732381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.197 [2024-07-15 19:46:40.732432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:50.197 [2024-07-15 19:46:40.732448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.728 ms 00:24:50.197 [2024-07-15 19:46:40.732459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.197 [2024-07-15 19:46:40.732516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.197 [2024-07-15 19:46:40.732527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:50.197 [2024-07-15 19:46:40.732540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:50.197 [2024-07-15 19:46:40.732560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.197 [2024-07-15 19:46:40.733088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.197 [2024-07-15 19:46:40.733105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:50.197 [2024-07-15 19:46:40.733116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:24:50.197 [2024-07-15 19:46:40.733127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.197 [2024-07-15 19:46:40.733255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.197 [2024-07-15 19:46:40.733269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:50.197 [2024-07-15 19:46:40.733280] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:24:50.197 [2024-07-15 19:46:40.733291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.197 [2024-07-15 19:46:40.757003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.197 [2024-07-15 19:46:40.757046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:50.197 [2024-07-15 19:46:40.757060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.689 ms 00:24:50.197 [2024-07-15 19:46:40.757088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.197 [2024-07-15 19:46:40.780522] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:50.197 [2024-07-15 19:46:40.780565] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:50.197 [2024-07-15 19:46:40.780581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.197 [2024-07-15 19:46:40.780591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:50.197 [2024-07-15 19:46:40.780620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.355 ms 00:24:50.197 [2024-07-15 19:46:40.780630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.197 [2024-07-15 19:46:40.816021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.197 [2024-07-15 19:46:40.816065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:50.197 [2024-07-15 19:46:40.816081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.345 ms 00:24:50.197 [2024-07-15 19:46:40.816099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.197 [2024-07-15 19:46:40.838790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.197 [2024-07-15 19:46:40.838831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:50.197 [2024-07-15 19:46:40.838846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.645 ms 00:24:50.197 [2024-07-15 19:46:40.838857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.197 [2024-07-15 19:46:40.860718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.197 [2024-07-15 19:46:40.860758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:50.197 [2024-07-15 19:46:40.860772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.817 ms 00:24:50.197 [2024-07-15 19:46:40.860792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.197 [2024-07-15 19:46:40.861824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.197 [2024-07-15 19:46:40.861874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:50.197 [2024-07-15 19:46:40.861889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.922 ms 00:24:50.197 [2024-07-15 19:46:40.861901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.197 [2024-07-15 19:46:40.963229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.197 [2024-07-15 19:46:40.963287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:50.197 [2024-07-15 19:46:40.963305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 101.302 ms 00:24:50.197 [2024-07-15 19:46:40.963317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.197 [2024-07-15 19:46:40.976963] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:50.197 [2024-07-15 19:46:40.980335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.197 [2024-07-15 19:46:40.980370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:50.197 [2024-07-15 19:46:40.980385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.941 ms 00:24:50.197 [2024-07-15 19:46:40.980396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.198 [2024-07-15 19:46:40.980500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.198 [2024-07-15 19:46:40.980514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:50.198 [2024-07-15 19:46:40.980526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:50.198 [2024-07-15 19:46:40.980537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.198 [2024-07-15 19:46:40.982288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.198 [2024-07-15 19:46:40.982331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:50.198 [2024-07-15 19:46:40.982344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.706 ms 00:24:50.198 [2024-07-15 19:46:40.982354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.198 [2024-07-15 19:46:40.982385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.198 [2024-07-15 19:46:40.982397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:50.198 [2024-07-15 19:46:40.982409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:50.198 [2024-07-15 19:46:40.982429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.198 [2024-07-15 19:46:40.982487] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:50.198 [2024-07-15 19:46:40.982500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.198 [2024-07-15 19:46:40.982512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:50.198 [2024-07-15 19:46:40.982528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:50.198 [2024-07-15 19:46:40.982539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.456 [2024-07-15 19:46:41.026078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.456 [2024-07-15 19:46:41.026118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:50.456 [2024-07-15 19:46:41.026132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.513 ms 00:24:50.456 [2024-07-15 19:46:41.026143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.456 [2024-07-15 19:46:41.026214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.456 [2024-07-15 19:46:41.026235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:50.456 [2024-07-15 19:46:41.026246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:50.456 [2024-07-15 19:46:41.026257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
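The size of the read-back that follows is fixed by the spdk_dd invocation shown at ftl/restore.sh@80 above (--skip=131072 --count=262144). Assuming both options count ftl0 blocks and the bdev exposes 4 KiB blocks (an assumption here, not something the log states), that is a 1024 MiB copy starting 512 MiB into the device, which matches the "Copying: 1024/1024 [MB]" progress below. A sketch of the arithmetic:

#include <stdio.h>

int main(void)
{
    const unsigned long long block_size = 4096ULL;   /* assumed ftl0 block size */
    const unsigned long long count      = 262144ULL; /* --count from the command line */
    const unsigned long long skip       = 131072ULL; /* --skip from the command line */

    printf("bytes copied = %llu MiB\n", count * block_size >> 20); /* 1024 */
    printf("start offset = %llu MiB\n", skip * block_size >> 20);  /* 512 */
    return 0;
}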
00:24:50.456 [2024-07-15 19:46:41.032344] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 456.422 ms, result 0 00:25:23.086  Copying: 30/1024 [MB] (30 MBps) Copying: 63/1024 [MB] (32 MBps) Copying: 93/1024 [MB] (30 MBps) Copying: 123/1024 [MB] (29 MBps) Copying: 155/1024 [MB] (32 MBps) Copying: 188/1024 [MB] (32 MBps) Copying: 221/1024 [MB] (33 MBps) Copying: 252/1024 [MB] (31 MBps) Copying: 285/1024 [MB] (32 MBps) Copying: 319/1024 [MB] (33 MBps) Copying: 352/1024 [MB] (32 MBps) Copying: 384/1024 [MB] (32 MBps) Copying: 416/1024 [MB] (31 MBps) Copying: 449/1024 [MB] (32 MBps) Copying: 482/1024 [MB] (33 MBps) Copying: 514/1024 [MB] (32 MBps) Copying: 545/1024 [MB] (31 MBps) Copying: 576/1024 [MB] (30 MBps) Copying: 608/1024 [MB] (31 MBps) Copying: 640/1024 [MB] (32 MBps) Copying: 673/1024 [MB] (33 MBps) Copying: 706/1024 [MB] (32 MBps) Copying: 738/1024 [MB] (32 MBps) Copying: 770/1024 [MB] (32 MBps) Copying: 804/1024 [MB] (33 MBps) Copying: 838/1024 [MB] (33 MBps) Copying: 871/1024 [MB] (33 MBps) Copying: 903/1024 [MB] (32 MBps) Copying: 935/1024 [MB] (32 MBps) Copying: 967/1024 [MB] (31 MBps) Copying: 998/1024 [MB] (31 MBps) Copying: 1024/1024 [MB] (average 32 MBps)[2024-07-15 19:47:13.714852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.086 [2024-07-15 19:47:13.714938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:23.086 [2024-07-15 19:47:13.714956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:23.086 [2024-07-15 19:47:13.714967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.086 [2024-07-15 19:47:13.714991] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:23.086 [2024-07-15 19:47:13.719415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.086 [2024-07-15 19:47:13.719454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:23.086 [2024-07-15 19:47:13.719469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.405 ms 00:25:23.086 [2024-07-15 19:47:13.719480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.086 [2024-07-15 19:47:13.719697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.086 [2024-07-15 19:47:13.719715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:23.086 [2024-07-15 19:47:13.719727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:25:23.086 [2024-07-15 19:47:13.719738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.086 [2024-07-15 19:47:13.724280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.087 [2024-07-15 19:47:13.724327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:23.087 [2024-07-15 19:47:13.724349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.523 ms 00:25:23.087 [2024-07-15 19:47:13.724361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.087 [2024-07-15 19:47:13.729975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.087 [2024-07-15 19:47:13.730013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:23.087 [2024-07-15 19:47:13.730026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.572 ms 00:25:23.087 [2024-07-15 
19:47:13.730035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.087 [2024-07-15 19:47:13.773629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.087 [2024-07-15 19:47:13.773678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:23.087 [2024-07-15 19:47:13.773708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.522 ms 00:25:23.087 [2024-07-15 19:47:13.773718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.087 [2024-07-15 19:47:13.795669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.087 [2024-07-15 19:47:13.795709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:23.087 [2024-07-15 19:47:13.795723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.907 ms 00:25:23.087 [2024-07-15 19:47:13.795755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.347 [2024-07-15 19:47:13.901259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.347 [2024-07-15 19:47:13.901301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:23.347 [2024-07-15 19:47:13.901316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.450 ms 00:25:23.347 [2024-07-15 19:47:13.901327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.347 [2024-07-15 19:47:13.940127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.347 [2024-07-15 19:47:13.940163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:23.347 [2024-07-15 19:47:13.940176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.784 ms 00:25:23.347 [2024-07-15 19:47:13.940186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.347 [2024-07-15 19:47:13.978907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.347 [2024-07-15 19:47:13.978943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:23.347 [2024-07-15 19:47:13.978956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.667 ms 00:25:23.347 [2024-07-15 19:47:13.978966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.348 [2024-07-15 19:47:14.016534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.348 [2024-07-15 19:47:14.016571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:23.348 [2024-07-15 19:47:14.016584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.517 ms 00:25:23.348 [2024-07-15 19:47:14.016622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.348 [2024-07-15 19:47:14.054556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.348 [2024-07-15 19:47:14.054593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:23.348 [2024-07-15 19:47:14.054606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.862 ms 00:25:23.348 [2024-07-15 19:47:14.054615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.348 [2024-07-15 19:47:14.054650] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:23.348 [2024-07-15 19:47:14.054665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:25:23.348 [2024-07-15 
19:47:14.054679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 
[2024-07-15 19:47:14.054955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.054998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 
state: free 00:25:23.348 [2024-07-15 19:47:14.055221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 
0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:23.348 [2024-07-15 19:47:14.055612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:23.349 [2024-07-15 19:47:14.055623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:23.349 [2024-07-15 19:47:14.055633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:23.349 [2024-07-15 19:47:14.055643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:23.349 [2024-07-15 19:47:14.055654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:23.349 [2024-07-15 19:47:14.055664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:23.349 [2024-07-15 19:47:14.055675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:23.349 [2024-07-15 19:47:14.055686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:23.349 [2024-07-15 19:47:14.055696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:23.349 [2024-07-15 19:47:14.055707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:23.349 [2024-07-15 19:47:14.055717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:23.349 [2024-07-15 19:47:14.055728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:23.349 [2024-07-15 19:47:14.055745] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:25:23.349 [2024-07-15 19:47:14.055755] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f3128da4-5021-4384-924c-f29450b8d9c2 00:25:23.349 [2024-07-15 19:47:14.055766] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:25:23.349 [2024-07-15 19:47:14.055775] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 18368 00:25:23.349 [2024-07-15 19:47:14.055793] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 17408 00:25:23.349 [2024-07-15 19:47:14.055804] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0551 00:25:23.349 [2024-07-15 19:47:14.055814] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:23.349 [2024-07-15 19:47:14.055830] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:23.349 [2024-07-15 19:47:14.055840] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:23.349 [2024-07-15 19:47:14.055849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:23.349 [2024-07-15 19:47:14.055858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:23.349 [2024-07-15 19:47:14.055868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.349 [2024-07-15 19:47:14.055877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:23.349 [2024-07-15 19:47:14.055890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.219 ms 00:25:23.349 [2024-07-15 19:47:14.055901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.349 [2024-07-15 19:47:14.076333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.349 [2024-07-15 19:47:14.076368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:23.349 [2024-07-15 19:47:14.076380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.401 ms 00:25:23.349 [2024-07-15 19:47:14.076417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.349 [2024-07-15 19:47:14.076945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.349 [2024-07-15 19:47:14.076962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:23.349 [2024-07-15 19:47:14.076973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:25:23.349 [2024-07-15 19:47:14.076983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.349 [2024-07-15 19:47:14.121555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.349 [2024-07-15 19:47:14.121591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:23.349 [2024-07-15 19:47:14.121603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.349 [2024-07-15 19:47:14.121614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.349 [2024-07-15 19:47:14.121667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.349 [2024-07-15 19:47:14.121678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:23.349 [2024-07-15 19:47:14.121688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.349 [2024-07-15 19:47:14.121697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.349 [2024-07-15 19:47:14.121751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:25:23.349 [2024-07-15 19:47:14.121764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:23.349 [2024-07-15 19:47:14.121774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.349 [2024-07-15 19:47:14.121794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.349 [2024-07-15 19:47:14.121814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.349 [2024-07-15 19:47:14.121824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:23.349 [2024-07-15 19:47:14.121851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.349 [2024-07-15 19:47:14.121861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.609 [2024-07-15 19:47:14.242720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.609 [2024-07-15 19:47:14.242785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:23.609 [2024-07-15 19:47:14.242815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.609 [2024-07-15 19:47:14.242826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.610 [2024-07-15 19:47:14.343997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.610 [2024-07-15 19:47:14.344071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:23.610 [2024-07-15 19:47:14.344085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.610 [2024-07-15 19:47:14.344112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.610 [2024-07-15 19:47:14.344176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.610 [2024-07-15 19:47:14.344188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:23.610 [2024-07-15 19:47:14.344198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.610 [2024-07-15 19:47:14.344209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.610 [2024-07-15 19:47:14.344246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.610 [2024-07-15 19:47:14.344257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:23.610 [2024-07-15 19:47:14.344273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.610 [2024-07-15 19:47:14.344282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.610 [2024-07-15 19:47:14.344392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.610 [2024-07-15 19:47:14.344405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:23.610 [2024-07-15 19:47:14.344417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.610 [2024-07-15 19:47:14.344426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.610 [2024-07-15 19:47:14.344461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.610 [2024-07-15 19:47:14.344474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:23.610 [2024-07-15 19:47:14.344488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.610 [2024-07-15 19:47:14.344498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.610 [2024-07-15 
19:47:14.344534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.610 [2024-07-15 19:47:14.344545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:23.610 [2024-07-15 19:47:14.344556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.610 [2024-07-15 19:47:14.344566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.610 [2024-07-15 19:47:14.344606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.610 [2024-07-15 19:47:14.344617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:23.610 [2024-07-15 19:47:14.344631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.610 [2024-07-15 19:47:14.344640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.610 [2024-07-15 19:47:14.344754] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 629.871 ms, result 0 00:25:24.983 00:25:24.983 00:25:24.983 19:47:15 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:26.912 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:26.912 19:47:17 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:26.912 19:47:17 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:26.912 19:47:17 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:26.912 19:47:17 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:26.912 19:47:17 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:26.912 19:47:17 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 82109 00:25:26.912 19:47:17 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 82109 ']' 00:25:26.912 19:47:17 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 82109 00:25:26.912 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (82109) - No such process 00:25:26.912 Process with pid 82109 is not found 00:25:26.912 Remove shared memory files 00:25:26.912 19:47:17 ftl.ftl_restore -- common/autotest_common.sh@975 -- # echo 'Process with pid 82109 is not found' 00:25:26.912 19:47:17 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:26.912 19:47:17 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:26.912 19:47:17 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:26.912 19:47:17 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:26.912 19:47:17 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:26.912 19:47:17 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:26.912 19:47:17 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:26.912 00:25:26.912 real 2m48.726s 00:25:26.912 user 2m35.356s 00:25:26.912 sys 0m14.655s 00:25:26.912 19:47:17 ftl.ftl_restore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:26.912 ************************************ 00:25:26.912 END TEST ftl_restore 00:25:26.912 ************************************ 00:25:26.912 19:47:17 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:26.912 19:47:17 ftl -- common/autotest_common.sh@1142 -- # return 0 00:25:26.912 19:47:17 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown 
/home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:26.912 19:47:17 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:26.912 19:47:17 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:26.912 19:47:17 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:26.912 ************************************ 00:25:26.912 START TEST ftl_dirty_shutdown 00:25:26.912 ************************************ 00:25:26.912 19:47:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:27.213 * Looking for test storage... 00:25:27.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:27.213 19:47:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:27.213 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:27.213 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:27.213 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:27.213 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:27.213 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:27.213 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:27.213 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:27.213 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:27.213 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:27.213 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:27.213 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:27.213 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:27.213 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:27.213 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:27.213 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:27.213 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:27.213 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:27.213 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:27.214 
19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=83904 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 83904 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@829 -- # '[' -z 83904 ']' 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:27.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:27.214 19:47:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:27.214 [2024-07-15 19:47:17.819744] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
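Editor's note: with the target process starting up, the script's next moves (visible in the trace that follows) are to assemble the bdev stack the FTL device sits on: the base NVMe namespace wrapped in a thin-provisioned logical volume, plus a split of the second NVMe device used as the write buffer cache. A condensed sketch of that sequence, assembled from the rpc.py calls traced below; run-specific UUIDs are replaced with placeholders:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device
    $RPC bdev_lvol_create_lvstore nvme0n1 lvs                           # prints the lvstore UUID
    $RPC bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore uuid>         # thin-provisioned 103424 MiB base bdev
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache device
    $RPC bdev_split_create nvc0n1 -s 5171 1                             # carve out nvc0n1p0 for the write buffer cache
    $RPC bdev_ftl_create -b ftl0 -d <lvol uuid> --l2p_dram_limit 10 -c nvc0n1p0

The 5171 MiB split size comes from the get_bdev_size/cache_size calculation the script performs below, and the --l2p_dram_limit 10 matches the l2p_dram_size_mb=10 set above.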
00:25:27.214 [2024-07-15 19:47:17.820495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83904 ] 00:25:27.214 [2024-07-15 19:47:17.985272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.473 [2024-07-15 19:47:18.261998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.410 19:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:28.410 19:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # return 0 00:25:28.410 19:47:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:28.410 19:47:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:28.410 19:47:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:28.410 19:47:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:28.410 19:47:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:28.410 19:47:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:28.977 19:47:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:28.977 19:47:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:28.977 19:47:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:28.977 19:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:25:28.977 19:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:28.977 19:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:28.977 19:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:28.977 19:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:29.234 19:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:29.234 { 00:25:29.234 "name": "nvme0n1", 00:25:29.234 "aliases": [ 00:25:29.234 "aa49bb29-dc25-44fd-8c3e-2a3e30a070e6" 00:25:29.234 ], 00:25:29.234 "product_name": "NVMe disk", 00:25:29.234 "block_size": 4096, 00:25:29.234 "num_blocks": 1310720, 00:25:29.234 "uuid": "aa49bb29-dc25-44fd-8c3e-2a3e30a070e6", 00:25:29.234 "assigned_rate_limits": { 00:25:29.234 "rw_ios_per_sec": 0, 00:25:29.234 "rw_mbytes_per_sec": 0, 00:25:29.234 "r_mbytes_per_sec": 0, 00:25:29.234 "w_mbytes_per_sec": 0 00:25:29.234 }, 00:25:29.234 "claimed": true, 00:25:29.234 "claim_type": "read_many_write_one", 00:25:29.234 "zoned": false, 00:25:29.234 "supported_io_types": { 00:25:29.234 "read": true, 00:25:29.234 "write": true, 00:25:29.234 "unmap": true, 00:25:29.235 "flush": true, 00:25:29.235 "reset": true, 00:25:29.235 "nvme_admin": true, 00:25:29.235 "nvme_io": true, 00:25:29.235 "nvme_io_md": false, 00:25:29.235 "write_zeroes": true, 00:25:29.235 "zcopy": false, 00:25:29.235 "get_zone_info": false, 00:25:29.235 "zone_management": false, 00:25:29.235 "zone_append": false, 00:25:29.235 "compare": true, 00:25:29.235 "compare_and_write": false, 00:25:29.235 "abort": true, 00:25:29.235 "seek_hole": false, 00:25:29.235 "seek_data": false, 00:25:29.235 "copy": true, 00:25:29.235 
"nvme_iov_md": false 00:25:29.235 }, 00:25:29.235 "driver_specific": { 00:25:29.235 "nvme": [ 00:25:29.235 { 00:25:29.235 "pci_address": "0000:00:11.0", 00:25:29.235 "trid": { 00:25:29.235 "trtype": "PCIe", 00:25:29.235 "traddr": "0000:00:11.0" 00:25:29.235 }, 00:25:29.235 "ctrlr_data": { 00:25:29.235 "cntlid": 0, 00:25:29.235 "vendor_id": "0x1b36", 00:25:29.235 "model_number": "QEMU NVMe Ctrl", 00:25:29.235 "serial_number": "12341", 00:25:29.235 "firmware_revision": "8.0.0", 00:25:29.235 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:29.235 "oacs": { 00:25:29.235 "security": 0, 00:25:29.235 "format": 1, 00:25:29.235 "firmware": 0, 00:25:29.235 "ns_manage": 1 00:25:29.235 }, 00:25:29.235 "multi_ctrlr": false, 00:25:29.235 "ana_reporting": false 00:25:29.235 }, 00:25:29.235 "vs": { 00:25:29.235 "nvme_version": "1.4" 00:25:29.235 }, 00:25:29.235 "ns_data": { 00:25:29.235 "id": 1, 00:25:29.235 "can_share": false 00:25:29.235 } 00:25:29.235 } 00:25:29.235 ], 00:25:29.235 "mp_policy": "active_passive" 00:25:29.235 } 00:25:29.235 } 00:25:29.235 ]' 00:25:29.235 19:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:29.235 19:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:29.235 19:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:29.235 19:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:25:29.235 19:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:25:29.235 19:47:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:25:29.235 19:47:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:29.235 19:47:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:29.235 19:47:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:29.235 19:47:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:29.235 19:47:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:29.493 19:47:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=ea6fff8a-dccc-41e4-acf1-1f9378016a84 00:25:29.493 19:47:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:29.493 19:47:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ea6fff8a-dccc-41e4-acf1-1f9378016a84 00:25:29.752 19:47:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:30.009 19:47:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=ed23df3e-48b2-4522-bc4b-e562f768ad13 00:25:30.009 19:47:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ed23df3e-48b2-4522-bc4b-e562f768ad13 00:25:30.009 19:47:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=7ddf2ba7-06b5-4f91-a573-499237ad8c86 00:25:30.009 19:47:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:30.009 19:47:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7ddf2ba7-06b5-4f91-a573-499237ad8c86 00:25:30.009 19:47:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:30.009 19:47:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:30.009 
19:47:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=7ddf2ba7-06b5-4f91-a573-499237ad8c86 00:25:30.009 19:47:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:30.009 19:47:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 7ddf2ba7-06b5-4f91-a573-499237ad8c86 00:25:30.009 19:47:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=7ddf2ba7-06b5-4f91-a573-499237ad8c86 00:25:30.009 19:47:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:30.009 19:47:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:30.009 19:47:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:30.009 19:47:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7ddf2ba7-06b5-4f91-a573-499237ad8c86 00:25:30.268 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:30.268 { 00:25:30.268 "name": "7ddf2ba7-06b5-4f91-a573-499237ad8c86", 00:25:30.268 "aliases": [ 00:25:30.268 "lvs/nvme0n1p0" 00:25:30.268 ], 00:25:30.268 "product_name": "Logical Volume", 00:25:30.268 "block_size": 4096, 00:25:30.268 "num_blocks": 26476544, 00:25:30.268 "uuid": "7ddf2ba7-06b5-4f91-a573-499237ad8c86", 00:25:30.268 "assigned_rate_limits": { 00:25:30.268 "rw_ios_per_sec": 0, 00:25:30.268 "rw_mbytes_per_sec": 0, 00:25:30.268 "r_mbytes_per_sec": 0, 00:25:30.268 "w_mbytes_per_sec": 0 00:25:30.268 }, 00:25:30.268 "claimed": false, 00:25:30.268 "zoned": false, 00:25:30.268 "supported_io_types": { 00:25:30.268 "read": true, 00:25:30.268 "write": true, 00:25:30.268 "unmap": true, 00:25:30.268 "flush": false, 00:25:30.268 "reset": true, 00:25:30.268 "nvme_admin": false, 00:25:30.268 "nvme_io": false, 00:25:30.268 "nvme_io_md": false, 00:25:30.268 "write_zeroes": true, 00:25:30.268 "zcopy": false, 00:25:30.268 "get_zone_info": false, 00:25:30.268 "zone_management": false, 00:25:30.268 "zone_append": false, 00:25:30.268 "compare": false, 00:25:30.268 "compare_and_write": false, 00:25:30.268 "abort": false, 00:25:30.268 "seek_hole": true, 00:25:30.268 "seek_data": true, 00:25:30.268 "copy": false, 00:25:30.268 "nvme_iov_md": false 00:25:30.268 }, 00:25:30.268 "driver_specific": { 00:25:30.268 "lvol": { 00:25:30.268 "lvol_store_uuid": "ed23df3e-48b2-4522-bc4b-e562f768ad13", 00:25:30.268 "base_bdev": "nvme0n1", 00:25:30.268 "thin_provision": true, 00:25:30.268 "num_allocated_clusters": 0, 00:25:30.268 "snapshot": false, 00:25:30.268 "clone": false, 00:25:30.268 "esnap_clone": false 00:25:30.268 } 00:25:30.268 } 00:25:30.268 } 00:25:30.268 ]' 00:25:30.268 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:30.268 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:30.268 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:30.526 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:30.526 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:30.526 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:25:30.526 19:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:30.526 19:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:30.526 19:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:30.784 19:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:30.784 19:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:30.784 19:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 7ddf2ba7-06b5-4f91-a573-499237ad8c86 00:25:30.784 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=7ddf2ba7-06b5-4f91-a573-499237ad8c86 00:25:30.784 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:30.784 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:30.784 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:30.784 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7ddf2ba7-06b5-4f91-a573-499237ad8c86 00:25:31.043 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:31.043 { 00:25:31.043 "name": "7ddf2ba7-06b5-4f91-a573-499237ad8c86", 00:25:31.043 "aliases": [ 00:25:31.043 "lvs/nvme0n1p0" 00:25:31.043 ], 00:25:31.043 "product_name": "Logical Volume", 00:25:31.043 "block_size": 4096, 00:25:31.043 "num_blocks": 26476544, 00:25:31.043 "uuid": "7ddf2ba7-06b5-4f91-a573-499237ad8c86", 00:25:31.043 "assigned_rate_limits": { 00:25:31.043 "rw_ios_per_sec": 0, 00:25:31.043 "rw_mbytes_per_sec": 0, 00:25:31.043 "r_mbytes_per_sec": 0, 00:25:31.043 "w_mbytes_per_sec": 0 00:25:31.043 }, 00:25:31.043 "claimed": false, 00:25:31.043 "zoned": false, 00:25:31.043 "supported_io_types": { 00:25:31.043 "read": true, 00:25:31.043 "write": true, 00:25:31.043 "unmap": true, 00:25:31.043 "flush": false, 00:25:31.043 "reset": true, 00:25:31.043 "nvme_admin": false, 00:25:31.043 "nvme_io": false, 00:25:31.043 "nvme_io_md": false, 00:25:31.043 "write_zeroes": true, 00:25:31.043 "zcopy": false, 00:25:31.043 "get_zone_info": false, 00:25:31.043 "zone_management": false, 00:25:31.043 "zone_append": false, 00:25:31.043 "compare": false, 00:25:31.043 "compare_and_write": false, 00:25:31.043 "abort": false, 00:25:31.043 "seek_hole": true, 00:25:31.043 "seek_data": true, 00:25:31.043 "copy": false, 00:25:31.043 "nvme_iov_md": false 00:25:31.043 }, 00:25:31.043 "driver_specific": { 00:25:31.043 "lvol": { 00:25:31.043 "lvol_store_uuid": "ed23df3e-48b2-4522-bc4b-e562f768ad13", 00:25:31.043 "base_bdev": "nvme0n1", 00:25:31.043 "thin_provision": true, 00:25:31.043 "num_allocated_clusters": 0, 00:25:31.043 "snapshot": false, 00:25:31.043 "clone": false, 00:25:31.043 "esnap_clone": false 00:25:31.043 } 00:25:31.043 } 00:25:31.043 } 00:25:31.043 ]' 00:25:31.043 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:31.043 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:31.043 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:31.043 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:31.043 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:31.043 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:25:31.043 19:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:31.043 19:47:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:31.302 19:47:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:31.302 19:47:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 7ddf2ba7-06b5-4f91-a573-499237ad8c86 00:25:31.302 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=7ddf2ba7-06b5-4f91-a573-499237ad8c86 00:25:31.302 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:31.302 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:31.302 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:31.302 19:47:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7ddf2ba7-06b5-4f91-a573-499237ad8c86 00:25:31.560 19:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:31.560 { 00:25:31.560 "name": "7ddf2ba7-06b5-4f91-a573-499237ad8c86", 00:25:31.560 "aliases": [ 00:25:31.560 "lvs/nvme0n1p0" 00:25:31.560 ], 00:25:31.560 "product_name": "Logical Volume", 00:25:31.560 "block_size": 4096, 00:25:31.560 "num_blocks": 26476544, 00:25:31.561 "uuid": "7ddf2ba7-06b5-4f91-a573-499237ad8c86", 00:25:31.561 "assigned_rate_limits": { 00:25:31.561 "rw_ios_per_sec": 0, 00:25:31.561 "rw_mbytes_per_sec": 0, 00:25:31.561 "r_mbytes_per_sec": 0, 00:25:31.561 "w_mbytes_per_sec": 0 00:25:31.561 }, 00:25:31.561 "claimed": false, 00:25:31.561 "zoned": false, 00:25:31.561 "supported_io_types": { 00:25:31.561 "read": true, 00:25:31.561 "write": true, 00:25:31.561 "unmap": true, 00:25:31.561 "flush": false, 00:25:31.561 "reset": true, 00:25:31.561 "nvme_admin": false, 00:25:31.561 "nvme_io": false, 00:25:31.561 "nvme_io_md": false, 00:25:31.561 "write_zeroes": true, 00:25:31.561 "zcopy": false, 00:25:31.561 "get_zone_info": false, 00:25:31.561 "zone_management": false, 00:25:31.561 "zone_append": false, 00:25:31.561 "compare": false, 00:25:31.561 "compare_and_write": false, 00:25:31.561 "abort": false, 00:25:31.561 "seek_hole": true, 00:25:31.561 "seek_data": true, 00:25:31.561 "copy": false, 00:25:31.561 "nvme_iov_md": false 00:25:31.561 }, 00:25:31.561 "driver_specific": { 00:25:31.561 "lvol": { 00:25:31.561 "lvol_store_uuid": "ed23df3e-48b2-4522-bc4b-e562f768ad13", 00:25:31.561 "base_bdev": "nvme0n1", 00:25:31.561 "thin_provision": true, 00:25:31.561 "num_allocated_clusters": 0, 00:25:31.561 "snapshot": false, 00:25:31.561 "clone": false, 00:25:31.561 "esnap_clone": false 00:25:31.561 } 00:25:31.561 } 00:25:31.561 } 00:25:31.561 ]' 00:25:31.561 19:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:31.561 19:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:31.561 19:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:31.561 19:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:31.561 19:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:31.561 19:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:25:31.561 19:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:31.561 19:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 7ddf2ba7-06b5-4f91-a573-499237ad8c86 
--l2p_dram_limit 10' 00:25:31.561 19:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:31.561 19:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:31.561 19:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:31.561 19:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7ddf2ba7-06b5-4f91-a573-499237ad8c86 --l2p_dram_limit 10 -c nvc0n1p0 00:25:31.820 [2024-07-15 19:47:22.358984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.820 [2024-07-15 19:47:22.359043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:31.820 [2024-07-15 19:47:22.359059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:31.820 [2024-07-15 19:47:22.359073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.820 [2024-07-15 19:47:22.359138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.820 [2024-07-15 19:47:22.359153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:31.820 [2024-07-15 19:47:22.359164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:25:31.820 [2024-07-15 19:47:22.359177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.820 [2024-07-15 19:47:22.359199] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:31.820 [2024-07-15 19:47:22.360448] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:31.820 [2024-07-15 19:47:22.360479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.820 [2024-07-15 19:47:22.360496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:31.820 [2024-07-15 19:47:22.360507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.285 ms 00:25:31.820 [2024-07-15 19:47:22.360519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.820 [2024-07-15 19:47:22.360605] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8cd2928b-48a6-4d1e-8e3e-dde3bed8b2dd 00:25:31.820 [2024-07-15 19:47:22.362073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.820 [2024-07-15 19:47:22.362107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:31.820 [2024-07-15 19:47:22.362122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:31.820 [2024-07-15 19:47:22.362133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.820 [2024-07-15 19:47:22.369525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.820 [2024-07-15 19:47:22.369552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:31.820 [2024-07-15 19:47:22.369570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.335 ms 00:25:31.820 [2024-07-15 19:47:22.369596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.820 [2024-07-15 19:47:22.369699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.820 [2024-07-15 19:47:22.369714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:31.820 [2024-07-15 19:47:22.369727] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:25:31.820 [2024-07-15 19:47:22.369737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.820 [2024-07-15 19:47:22.369820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.820 [2024-07-15 19:47:22.369833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:31.820 [2024-07-15 19:47:22.369860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:31.820 [2024-07-15 19:47:22.369874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.820 [2024-07-15 19:47:22.369903] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:31.820 [2024-07-15 19:47:22.375752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.820 [2024-07-15 19:47:22.375795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:31.820 [2024-07-15 19:47:22.375823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.859 ms 00:25:31.820 [2024-07-15 19:47:22.375838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.820 [2024-07-15 19:47:22.375875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.820 [2024-07-15 19:47:22.375889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:31.820 [2024-07-15 19:47:22.375899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:31.820 [2024-07-15 19:47:22.375912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.820 [2024-07-15 19:47:22.375954] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:31.820 [2024-07-15 19:47:22.376089] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:31.820 [2024-07-15 19:47:22.376104] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:31.820 [2024-07-15 19:47:22.376122] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:31.820 [2024-07-15 19:47:22.376135] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:31.821 [2024-07-15 19:47:22.376149] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:31.821 [2024-07-15 19:47:22.376160] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:31.821 [2024-07-15 19:47:22.376173] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:31.821 [2024-07-15 19:47:22.376186] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:31.821 [2024-07-15 19:47:22.376200] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:31.821 [2024-07-15 19:47:22.376210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.821 [2024-07-15 19:47:22.376222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:31.821 [2024-07-15 19:47:22.376232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:25:31.821 [2024-07-15 19:47:22.376244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.821 [2024-07-15 19:47:22.376316] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.821 [2024-07-15 19:47:22.376329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:31.821 [2024-07-15 19:47:22.376339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:31.821 [2024-07-15 19:47:22.376351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.821 [2024-07-15 19:47:22.376440] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:31.821 [2024-07-15 19:47:22.376457] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:31.821 [2024-07-15 19:47:22.376478] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:31.821 [2024-07-15 19:47:22.376491] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.821 [2024-07-15 19:47:22.376501] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:31.821 [2024-07-15 19:47:22.376513] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:31.821 [2024-07-15 19:47:22.376523] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:31.821 [2024-07-15 19:47:22.376535] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:31.821 [2024-07-15 19:47:22.376544] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:31.821 [2024-07-15 19:47:22.376556] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:31.821 [2024-07-15 19:47:22.376565] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:31.821 [2024-07-15 19:47:22.376576] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:31.821 [2024-07-15 19:47:22.376586] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:31.821 [2024-07-15 19:47:22.376599] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:31.821 [2024-07-15 19:47:22.376609] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:31.821 [2024-07-15 19:47:22.376621] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.821 [2024-07-15 19:47:22.376630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:31.821 [2024-07-15 19:47:22.376644] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:31.821 [2024-07-15 19:47:22.376653] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.821 [2024-07-15 19:47:22.376665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:31.821 [2024-07-15 19:47:22.376674] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:31.821 [2024-07-15 19:47:22.376686] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.821 [2024-07-15 19:47:22.376695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:31.821 [2024-07-15 19:47:22.376706] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:31.821 [2024-07-15 19:47:22.376715] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.821 [2024-07-15 19:47:22.376727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:31.821 [2024-07-15 19:47:22.376736] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:31.821 [2024-07-15 19:47:22.376747] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.821 [2024-07-15 19:47:22.376756] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:31.821 [2024-07-15 19:47:22.376768] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:31.821 [2024-07-15 19:47:22.376777] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.821 [2024-07-15 19:47:22.376788] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:31.821 [2024-07-15 19:47:22.376808] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:31.821 [2024-07-15 19:47:22.376823] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:31.821 [2024-07-15 19:47:22.376833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:31.821 [2024-07-15 19:47:22.376844] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:31.821 [2024-07-15 19:47:22.376853] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:31.821 [2024-07-15 19:47:22.376865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:31.821 [2024-07-15 19:47:22.376874] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:31.821 [2024-07-15 19:47:22.376887] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.821 [2024-07-15 19:47:22.376896] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:31.821 [2024-07-15 19:47:22.376908] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:31.821 [2024-07-15 19:47:22.376917] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.821 [2024-07-15 19:47:22.376928] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:31.821 [2024-07-15 19:47:22.376938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:31.821 [2024-07-15 19:47:22.376949] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:31.821 [2024-07-15 19:47:22.376960] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.821 [2024-07-15 19:47:22.376973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:31.821 [2024-07-15 19:47:22.376983] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:31.821 [2024-07-15 19:47:22.376997] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:31.821 [2024-07-15 19:47:22.377006] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:31.821 [2024-07-15 19:47:22.377017] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:31.821 [2024-07-15 19:47:22.377027] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:31.821 [2024-07-15 19:47:22.377043] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:31.821 [2024-07-15 19:47:22.377056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:31.821 [2024-07-15 19:47:22.377072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:31.821 [2024-07-15 19:47:22.377083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:31.821 [2024-07-15 19:47:22.377096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:31.821 [2024-07-15 19:47:22.377106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:31.821 [2024-07-15 19:47:22.377119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:31.821 [2024-07-15 19:47:22.377129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:31.821 [2024-07-15 19:47:22.377141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:31.821 [2024-07-15 19:47:22.377151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:31.821 [2024-07-15 19:47:22.377165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:31.821 [2024-07-15 19:47:22.377175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:31.821 [2024-07-15 19:47:22.377190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:31.821 [2024-07-15 19:47:22.377200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:31.821 [2024-07-15 19:47:22.377213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:31.821 [2024-07-15 19:47:22.377223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:31.821 [2024-07-15 19:47:22.377235] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:31.821 [2024-07-15 19:47:22.377247] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:31.821 [2024-07-15 19:47:22.377260] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:31.821 [2024-07-15 19:47:22.377271] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:31.821 [2024-07-15 19:47:22.377283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:31.821 [2024-07-15 19:47:22.377293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:31.821 [2024-07-15 19:47:22.377306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.821 [2024-07-15 19:47:22.377316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:31.821 [2024-07-15 19:47:22.377328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:25:31.821 [2024-07-15 19:47:22.377342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.821 [2024-07-15 19:47:22.377385] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:31.821 [2024-07-15 19:47:22.377398] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:36.007 [2024-07-15 19:47:26.086565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.007 [2024-07-15 19:47:26.086620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:36.007 [2024-07-15 19:47:26.086639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3709.152 ms 00:25:36.007 [2024-07-15 19:47:26.086651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.007 [2024-07-15 19:47:26.131317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.007 [2024-07-15 19:47:26.131369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:36.007 [2024-07-15 19:47:26.131387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.366 ms 00:25:36.007 [2024-07-15 19:47:26.131397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.007 [2024-07-15 19:47:26.131547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.007 [2024-07-15 19:47:26.131560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:36.007 [2024-07-15 19:47:26.131574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:36.007 [2024-07-15 19:47:26.131587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.007 [2024-07-15 19:47:26.181950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.007 [2024-07-15 19:47:26.181994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:36.007 [2024-07-15 19:47:26.182011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.316 ms 00:25:36.007 [2024-07-15 19:47:26.182022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.007 [2024-07-15 19:47:26.182067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.007 [2024-07-15 19:47:26.182086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:36.007 [2024-07-15 19:47:26.182100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:36.007 [2024-07-15 19:47:26.182110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.007 [2024-07-15 19:47:26.182586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.007 [2024-07-15 19:47:26.182601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:36.007 [2024-07-15 19:47:26.182614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:25:36.007 [2024-07-15 19:47:26.182624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.007 [2024-07-15 19:47:26.182734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.007 [2024-07-15 19:47:26.182747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:36.007 [2024-07-15 19:47:26.182763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:25:36.007 [2024-07-15 19:47:26.182773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.007 [2024-07-15 19:47:26.205139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.007 [2024-07-15 19:47:26.205183] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:36.007 [2024-07-15 19:47:26.205201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.317 ms 00:25:36.007 [2024-07-15 19:47:26.205211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.007 [2024-07-15 19:47:26.218615] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:36.007 [2024-07-15 19:47:26.221822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.007 [2024-07-15 19:47:26.221855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:36.007 [2024-07-15 19:47:26.221867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.522 ms 00:25:36.007 [2024-07-15 19:47:26.221897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.007 [2024-07-15 19:47:26.367701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.007 [2024-07-15 19:47:26.367764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:36.007 [2024-07-15 19:47:26.367810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 145.769 ms 00:25:36.007 [2024-07-15 19:47:26.367824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.007 [2024-07-15 19:47:26.368011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.007 [2024-07-15 19:47:26.368030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:36.007 [2024-07-15 19:47:26.368041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:25:36.007 [2024-07-15 19:47:26.368057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.007 [2024-07-15 19:47:26.407641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.007 [2024-07-15 19:47:26.407684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:36.007 [2024-07-15 19:47:26.407699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.535 ms 00:25:36.007 [2024-07-15 19:47:26.407711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.007 [2024-07-15 19:47:26.446619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.007 [2024-07-15 19:47:26.446672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:36.007 [2024-07-15 19:47:26.446687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.864 ms 00:25:36.007 [2024-07-15 19:47:26.446699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.007 [2024-07-15 19:47:26.447545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.007 [2024-07-15 19:47:26.447575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:36.007 [2024-07-15 19:47:26.447587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:25:36.007 [2024-07-15 19:47:26.447603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.007 [2024-07-15 19:47:26.562408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.007 [2024-07-15 19:47:26.562476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:36.007 [2024-07-15 19:47:26.562493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.750 ms 00:25:36.007 [2024-07-15 19:47:26.562511] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.007 [2024-07-15 19:47:26.601984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.008 [2024-07-15 19:47:26.602029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:36.008 [2024-07-15 19:47:26.602059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.428 ms 00:25:36.008 [2024-07-15 19:47:26.602072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.008 [2024-07-15 19:47:26.640815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.008 [2024-07-15 19:47:26.640856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:36.008 [2024-07-15 19:47:26.640869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.701 ms 00:25:36.008 [2024-07-15 19:47:26.640898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.008 [2024-07-15 19:47:26.679505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.008 [2024-07-15 19:47:26.679549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:36.008 [2024-07-15 19:47:26.679563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.567 ms 00:25:36.008 [2024-07-15 19:47:26.679576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.008 [2024-07-15 19:47:26.679630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.008 [2024-07-15 19:47:26.679646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:36.008 [2024-07-15 19:47:26.679657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:36.008 [2024-07-15 19:47:26.679673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.008 [2024-07-15 19:47:26.679808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.008 [2024-07-15 19:47:26.679827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:36.008 [2024-07-15 19:47:26.679842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:36.008 [2024-07-15 19:47:26.679854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.008 [2024-07-15 19:47:26.680965] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4321.383 ms, result 0 00:25:36.008 { 00:25:36.008 "name": "ftl0", 00:25:36.008 "uuid": "8cd2928b-48a6-4d1e-8e3e-dde3bed8b2dd" 00:25:36.008 } 00:25:36.008 19:47:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:36.008 19:47:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:36.266 19:47:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:36.266 19:47:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:36.266 19:47:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:36.525 /dev/nbd0 00:25:36.525 19:47:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:36.525 19:47:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:36.525 19:47:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # local i 00:25:36.525 19:47:27 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:36.526 19:47:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:36.526 19:47:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:36.526 19:47:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # break 00:25:36.526 19:47:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:36.526 19:47:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:36.526 19:47:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:36.526 1+0 records in 00:25:36.526 1+0 records out 00:25:36.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432333 s, 9.5 MB/s 00:25:36.526 19:47:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:36.526 19:47:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # size=4096 00:25:36.526 19:47:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:36.526 19:47:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:36.526 19:47:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # return 0 00:25:36.526 19:47:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:36.784 [2024-07-15 19:47:27.342759] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:25:36.784 [2024-07-15 19:47:27.342936] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84064 ] 00:25:36.784 [2024-07-15 19:47:27.525858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.043 [2024-07-15 19:47:27.753753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.971  Copying: 207/1024 [MB] (207 MBps) Copying: 415/1024 [MB] (207 MBps) Copying: 623/1024 [MB] (208 MBps) Copying: 827/1024 [MB] (204 MBps) Copying: 1024/1024 [MB] (average 205 MBps) 00:25:43.971 00:25:43.971 19:47:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:45.878 19:47:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:45.878 [2024-07-15 19:47:36.356255] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:25:45.878 [2024-07-15 19:47:36.356439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84151 ] 00:25:45.878 [2024-07-15 19:47:36.538390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.137 [2024-07-15 19:47:36.773794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.187  Copying: 18/1024 [MB] (18 MBps) Copying: 38/1024 [MB] (19 MBps) Copying: 57/1024 [MB] (18 MBps) Copying: 76/1024 [MB] (19 MBps) Copying: 95/1024 [MB] (18 MBps) Copying: 113/1024 [MB] (18 MBps) Copying: 132/1024 [MB] (18 MBps) Copying: 151/1024 [MB] (18 MBps) Copying: 170/1024 [MB] (19 MBps) Copying: 189/1024 [MB] (19 MBps) Copying: 207/1024 [MB] (18 MBps) Copying: 226/1024 [MB] (18 MBps) Copying: 245/1024 [MB] (18 MBps) Copying: 264/1024 [MB] (18 MBps) Copying: 283/1024 [MB] (18 MBps) Copying: 301/1024 [MB] (18 MBps) Copying: 320/1024 [MB] (19 MBps) Copying: 339/1024 [MB] (19 MBps) Copying: 358/1024 [MB] (18 MBps) Copying: 377/1024 [MB] (18 MBps) Copying: 396/1024 [MB] (18 MBps) Copying: 415/1024 [MB] (19 MBps) Copying: 434/1024 [MB] (18 MBps) Copying: 453/1024 [MB] (19 MBps) Copying: 473/1024 [MB] (19 MBps) Copying: 492/1024 [MB] (19 MBps) Copying: 511/1024 [MB] (19 MBps) Copying: 530/1024 [MB] (18 MBps) Copying: 549/1024 [MB] (18 MBps) Copying: 568/1024 [MB] (18 MBps) Copying: 586/1024 [MB] (18 MBps) Copying: 604/1024 [MB] (18 MBps) Copying: 623/1024 [MB] (18 MBps) Copying: 641/1024 [MB] (18 MBps) Copying: 660/1024 [MB] (18 MBps) Copying: 679/1024 [MB] (18 MBps) Copying: 697/1024 [MB] (18 MBps) Copying: 715/1024 [MB] (18 MBps) Copying: 734/1024 [MB] (18 MBps) Copying: 753/1024 [MB] (18 MBps) Copying: 771/1024 [MB] (18 MBps) Copying: 790/1024 [MB] (18 MBps) Copying: 809/1024 [MB] (19 MBps) Copying: 828/1024 [MB] (18 MBps) Copying: 846/1024 [MB] (18 MBps) Copying: 865/1024 [MB] (18 MBps) Copying: 884/1024 [MB] (18 MBps) Copying: 904/1024 [MB] (19 MBps) Copying: 924/1024 [MB] (20 MBps) Copying: 943/1024 [MB] (19 MBps) Copying: 962/1024 [MB] (19 MBps) Copying: 981/1024 [MB] (18 MBps) Copying: 1000/1024 [MB] (18 MBps) Copying: 1019/1024 [MB] (19 MBps) Copying: 1024/1024 [MB] (average 18 MBps) 00:26:42.187 00:26:42.187 19:48:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:26:42.187 19:48:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:26:42.187 19:48:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:42.446 [2024-07-15 19:48:33.114662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.446 [2024-07-15 19:48:33.114714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:42.446 [2024-07-15 19:48:33.114743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:42.446 [2024-07-15 19:48:33.114754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.446 [2024-07-15 19:48:33.114802] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:42.446 [2024-07-15 19:48:33.118888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.446 [2024-07-15 19:48:33.118927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Unregister IO device 00:26:42.446 [2024-07-15 19:48:33.118940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.065 ms 00:26:42.446 [2024-07-15 19:48:33.118955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.446 [2024-07-15 19:48:33.120698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.446 [2024-07-15 19:48:33.120748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:42.446 [2024-07-15 19:48:33.120761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.713 ms 00:26:42.446 [2024-07-15 19:48:33.120774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.446 [2024-07-15 19:48:33.138045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.446 [2024-07-15 19:48:33.138090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:42.446 [2024-07-15 19:48:33.138105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.237 ms 00:26:42.446 [2024-07-15 19:48:33.138117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.446 [2024-07-15 19:48:33.143290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.446 [2024-07-15 19:48:33.143327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:42.446 [2024-07-15 19:48:33.143339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.135 ms 00:26:42.446 [2024-07-15 19:48:33.143352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.446 [2024-07-15 19:48:33.182638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.446 [2024-07-15 19:48:33.182679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:42.446 [2024-07-15 19:48:33.182693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.204 ms 00:26:42.446 [2024-07-15 19:48:33.182706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.446 [2024-07-15 19:48:33.205928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.446 [2024-07-15 19:48:33.205974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:42.446 [2024-07-15 19:48:33.205992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.182 ms 00:26:42.446 [2024-07-15 19:48:33.206010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.446 [2024-07-15 19:48:33.206164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.446 [2024-07-15 19:48:33.206181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:42.446 [2024-07-15 19:48:33.206192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:26:42.446 [2024-07-15 19:48:33.206204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.705 [2024-07-15 19:48:33.244866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.705 [2024-07-15 19:48:33.244907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:42.705 [2024-07-15 19:48:33.244921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.644 ms 00:26:42.705 [2024-07-15 19:48:33.244933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.705 [2024-07-15 19:48:33.284346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.705 [2024-07-15 
19:48:33.284386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:42.705 [2024-07-15 19:48:33.284415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.374 ms 00:26:42.705 [2024-07-15 19:48:33.284427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.705 [2024-07-15 19:48:33.322155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.705 [2024-07-15 19:48:33.322208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:42.705 [2024-07-15 19:48:33.322237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.689 ms 00:26:42.705 [2024-07-15 19:48:33.322249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.706 [2024-07-15 19:48:33.359740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.706 [2024-07-15 19:48:33.359792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:42.706 [2024-07-15 19:48:33.359806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.401 ms 00:26:42.706 [2024-07-15 19:48:33.359818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.706 [2024-07-15 19:48:33.359872] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:42.706 [2024-07-15 19:48:33.359896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.359909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.359923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.359935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.359948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.359959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.359976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.359987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360094] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 
19:48:33.360403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 
00:26:42.706 [2024-07-15 19:48:33.360718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.360999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.361014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.361025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 
wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.361038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.361048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.361062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.361073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.361087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.361098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:42.706 [2024-07-15 19:48:33.361111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:42.707 [2024-07-15 19:48:33.361122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:42.707 [2024-07-15 19:48:33.361137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:42.707 [2024-07-15 19:48:33.361148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:42.707 [2024-07-15 19:48:33.361168] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:42.707 [2024-07-15 19:48:33.361178] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8cd2928b-48a6-4d1e-8e3e-dde3bed8b2dd 00:26:42.707 [2024-07-15 19:48:33.361192] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:42.707 [2024-07-15 19:48:33.361201] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:42.707 [2024-07-15 19:48:33.361222] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:42.707 [2024-07-15 19:48:33.361232] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:42.707 [2024-07-15 19:48:33.361244] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:42.707 [2024-07-15 19:48:33.361254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:42.707 [2024-07-15 19:48:33.361266] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:42.707 [2024-07-15 19:48:33.361275] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:42.707 [2024-07-15 19:48:33.361286] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:42.707 [2024-07-15 19:48:33.361296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.707 [2024-07-15 19:48:33.361308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:42.707 [2024-07-15 19:48:33.361319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.441 ms 00:26:42.707 [2024-07-15 19:48:33.361332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.707 [2024-07-15 19:48:33.382359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.707 [2024-07-15 19:48:33.382395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:42.707 [2024-07-15 19:48:33.382408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.976 ms 00:26:42.707 [2024-07-15 19:48:33.382421] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.707 [2024-07-15 19:48:33.382960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.707 [2024-07-15 19:48:33.382988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:42.707 [2024-07-15 19:48:33.382999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:26:42.707 [2024-07-15 19:48:33.383011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.707 [2024-07-15 19:48:33.447246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.707 [2024-07-15 19:48:33.447286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:42.707 [2024-07-15 19:48:33.447300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.707 [2024-07-15 19:48:33.447313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.707 [2024-07-15 19:48:33.447374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.707 [2024-07-15 19:48:33.447391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:42.707 [2024-07-15 19:48:33.447401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.707 [2024-07-15 19:48:33.447413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.707 [2024-07-15 19:48:33.447489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.707 [2024-07-15 19:48:33.447510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:42.707 [2024-07-15 19:48:33.447521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.707 [2024-07-15 19:48:33.447533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.707 [2024-07-15 19:48:33.447553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.707 [2024-07-15 19:48:33.447574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:42.707 [2024-07-15 19:48:33.447585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.707 [2024-07-15 19:48:33.447597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.965 [2024-07-15 19:48:33.569419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.965 [2024-07-15 19:48:33.569482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:42.965 [2024-07-15 19:48:33.569513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.965 [2024-07-15 19:48:33.569526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.965 [2024-07-15 19:48:33.671127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.965 [2024-07-15 19:48:33.671192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:42.965 [2024-07-15 19:48:33.671223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.965 [2024-07-15 19:48:33.671237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.965 [2024-07-15 19:48:33.671346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.965 [2024-07-15 19:48:33.671362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:42.965 [2024-07-15 19:48:33.671376] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.965 [2024-07-15 19:48:33.671389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.965 [2024-07-15 19:48:33.671437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.965 [2024-07-15 19:48:33.671453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:42.965 [2024-07-15 19:48:33.671464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.965 [2024-07-15 19:48:33.671477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.965 [2024-07-15 19:48:33.671579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.965 [2024-07-15 19:48:33.671596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:42.965 [2024-07-15 19:48:33.671606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.965 [2024-07-15 19:48:33.671621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.965 [2024-07-15 19:48:33.671658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.965 [2024-07-15 19:48:33.671673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:42.965 [2024-07-15 19:48:33.671683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.965 [2024-07-15 19:48:33.671695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.965 [2024-07-15 19:48:33.671734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.965 [2024-07-15 19:48:33.671748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:42.965 [2024-07-15 19:48:33.671758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.965 [2024-07-15 19:48:33.671773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.965 [2024-07-15 19:48:33.671858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.965 [2024-07-15 19:48:33.671876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:42.965 [2024-07-15 19:48:33.671887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.965 [2024-07-15 19:48:33.671899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.965 [2024-07-15 19:48:33.672033] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 557.338 ms, result 0 00:26:42.965 true 00:26:42.965 19:48:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 83904 00:26:42.965 19:48:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid83904 00:26:42.965 19:48:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:26:43.224 [2024-07-15 19:48:33.815212] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:26:43.224 [2024-07-15 19:48:33.815368] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84732 ] 00:26:43.224 [2024-07-15 19:48:33.993985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.483 [2024-07-15 19:48:34.217495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.424  Copying: 209/1024 [MB] (209 MBps) Copying: 420/1024 [MB] (211 MBps) Copying: 629/1024 [MB] (208 MBps) Copying: 831/1024 [MB] (202 MBps) Copying: 1022/1024 [MB] (190 MBps) Copying: 1024/1024 [MB] (average 204 MBps) 00:26:50.424 00:26:50.424 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 83904 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:50.424 19:48:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:50.424 [2024-07-15 19:48:41.037279] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:26:50.424 [2024-07-15 19:48:41.037444] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84812 ] 00:26:50.683 [2024-07-15 19:48:41.216579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.683 [2024-07-15 19:48:41.449894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.324 [2024-07-15 19:48:41.855825] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:51.324 [2024-07-15 19:48:41.855887] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:51.324 [2024-07-15 19:48:41.922057] blobstore.c:4888:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:51.324 [2024-07-15 19:48:41.922332] blobstore.c:4835:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:51.324 [2024-07-15 19:48:41.922475] blobstore.c:4835:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:51.583 [2024-07-15 19:48:42.175963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.583 [2024-07-15 19:48:42.176012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:51.583 [2024-07-15 19:48:42.176055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:51.583 [2024-07-15 19:48:42.176065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.583 [2024-07-15 19:48:42.176120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.583 [2024-07-15 19:48:42.176134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:51.583 [2024-07-15 19:48:42.176145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:51.583 [2024-07-15 19:48:42.176158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.583 [2024-07-15 19:48:42.176179] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:51.583 [2024-07-15 19:48:42.177219] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: 
[FTL][ftl0] Using bdev as NV Cache device 00:26:51.583 [2024-07-15 19:48:42.177241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.583 [2024-07-15 19:48:42.177251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:51.583 [2024-07-15 19:48:42.177262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.066 ms 00:26:51.583 [2024-07-15 19:48:42.177272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.583 [2024-07-15 19:48:42.178644] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:51.583 [2024-07-15 19:48:42.198261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.583 [2024-07-15 19:48:42.198297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:51.583 [2024-07-15 19:48:42.198311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.618 ms 00:26:51.583 [2024-07-15 19:48:42.198342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.583 [2024-07-15 19:48:42.198401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.583 [2024-07-15 19:48:42.198413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:51.583 [2024-07-15 19:48:42.198424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:26:51.583 [2024-07-15 19:48:42.198434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.583 [2024-07-15 19:48:42.205092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.583 [2024-07-15 19:48:42.205122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:51.583 [2024-07-15 19:48:42.205134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.583 ms 00:26:51.583 [2024-07-15 19:48:42.205160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.583 [2024-07-15 19:48:42.205236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.583 [2024-07-15 19:48:42.205250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:51.583 [2024-07-15 19:48:42.205261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:26:51.583 [2024-07-15 19:48:42.205271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.583 [2024-07-15 19:48:42.205311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.583 [2024-07-15 19:48:42.205322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:51.583 [2024-07-15 19:48:42.205336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:51.583 [2024-07-15 19:48:42.205346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.583 [2024-07-15 19:48:42.205371] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:51.583 [2024-07-15 19:48:42.210861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.583 [2024-07-15 19:48:42.210892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:51.583 [2024-07-15 19:48:42.210905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.496 ms 00:26:51.583 [2024-07-15 19:48:42.210915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.583 [2024-07-15 19:48:42.210948] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:26:51.583 [2024-07-15 19:48:42.210959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:51.583 [2024-07-15 19:48:42.210970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:51.583 [2024-07-15 19:48:42.210980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.583 [2024-07-15 19:48:42.211030] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:51.583 [2024-07-15 19:48:42.211053] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:51.583 [2024-07-15 19:48:42.211091] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:51.583 [2024-07-15 19:48:42.211109] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:51.583 [2024-07-15 19:48:42.211195] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:51.583 [2024-07-15 19:48:42.211208] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:51.583 [2024-07-15 19:48:42.211221] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:51.583 [2024-07-15 19:48:42.211234] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:51.583 [2024-07-15 19:48:42.211246] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:51.583 [2024-07-15 19:48:42.211260] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:51.584 [2024-07-15 19:48:42.211270] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:51.584 [2024-07-15 19:48:42.211279] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:51.584 [2024-07-15 19:48:42.211289] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:51.584 [2024-07-15 19:48:42.211308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.584 [2024-07-15 19:48:42.211318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:51.584 [2024-07-15 19:48:42.211328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:26:51.584 [2024-07-15 19:48:42.211338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.584 [2024-07-15 19:48:42.211406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.584 [2024-07-15 19:48:42.211417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:51.584 [2024-07-15 19:48:42.211427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:26:51.584 [2024-07-15 19:48:42.211441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.584 [2024-07-15 19:48:42.211525] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:51.584 [2024-07-15 19:48:42.211538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:51.584 [2024-07-15 19:48:42.211549] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:51.584 [2024-07-15 19:48:42.211559] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:51.584 [2024-07-15 
19:48:42.211569] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:51.584 [2024-07-15 19:48:42.211578] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:51.584 [2024-07-15 19:48:42.211587] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:51.584 [2024-07-15 19:48:42.211596] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:51.584 [2024-07-15 19:48:42.211606] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:51.584 [2024-07-15 19:48:42.211615] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:51.584 [2024-07-15 19:48:42.211624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:51.584 [2024-07-15 19:48:42.211634] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:51.584 [2024-07-15 19:48:42.211644] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:51.584 [2024-07-15 19:48:42.211653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:51.584 [2024-07-15 19:48:42.211663] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:51.584 [2024-07-15 19:48:42.211672] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:51.584 [2024-07-15 19:48:42.211690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:51.584 [2024-07-15 19:48:42.211700] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:51.584 [2024-07-15 19:48:42.211709] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:51.584 [2024-07-15 19:48:42.211718] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:51.584 [2024-07-15 19:48:42.211728] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:51.584 [2024-07-15 19:48:42.211737] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:51.584 [2024-07-15 19:48:42.211746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:51.584 [2024-07-15 19:48:42.211755] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:51.584 [2024-07-15 19:48:42.211763] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:51.584 [2024-07-15 19:48:42.211772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:51.584 [2024-07-15 19:48:42.211795] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:51.584 [2024-07-15 19:48:42.211804] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:51.584 [2024-07-15 19:48:42.211813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:51.584 [2024-07-15 19:48:42.211823] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:51.584 [2024-07-15 19:48:42.211832] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:51.584 [2024-07-15 19:48:42.211841] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:51.584 [2024-07-15 19:48:42.211851] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:51.584 [2024-07-15 19:48:42.211860] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:51.584 [2024-07-15 19:48:42.211868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:51.584 [2024-07-15 19:48:42.211877] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 
MiB 00:26:51.584 [2024-07-15 19:48:42.211887] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:51.584 [2024-07-15 19:48:42.211896] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:51.584 [2024-07-15 19:48:42.211905] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:51.584 [2024-07-15 19:48:42.211914] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:51.584 [2024-07-15 19:48:42.211923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:51.584 [2024-07-15 19:48:42.211931] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:51.584 [2024-07-15 19:48:42.211942] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:51.584 [2024-07-15 19:48:42.211951] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:51.584 [2024-07-15 19:48:42.211961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:51.584 [2024-07-15 19:48:42.211971] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:51.584 [2024-07-15 19:48:42.211980] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:51.584 [2024-07-15 19:48:42.211994] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:51.584 [2024-07-15 19:48:42.212004] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:51.584 [2024-07-15 19:48:42.212013] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:51.584 [2024-07-15 19:48:42.212022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:51.584 [2024-07-15 19:48:42.212032] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:51.584 [2024-07-15 19:48:42.212041] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:51.584 [2024-07-15 19:48:42.212052] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:51.584 [2024-07-15 19:48:42.212063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:51.584 [2024-07-15 19:48:42.212074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:51.584 [2024-07-15 19:48:42.212085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:51.584 [2024-07-15 19:48:42.212095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:51.584 [2024-07-15 19:48:42.212105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:51.584 [2024-07-15 19:48:42.212116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:51.584 [2024-07-15 19:48:42.212126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:51.584 [2024-07-15 19:48:42.212136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:51.584 [2024-07-15 19:48:42.212146] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:51.584 [2024-07-15 19:48:42.212156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:51.584 [2024-07-15 19:48:42.212166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:51.584 [2024-07-15 19:48:42.212176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:51.584 [2024-07-15 19:48:42.212186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:51.584 [2024-07-15 19:48:42.212195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:51.584 [2024-07-15 19:48:42.212206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:51.584 [2024-07-15 19:48:42.212216] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:51.584 [2024-07-15 19:48:42.212227] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:51.584 [2024-07-15 19:48:42.212237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:51.584 [2024-07-15 19:48:42.212247] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:51.584 [2024-07-15 19:48:42.212257] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:51.584 [2024-07-15 19:48:42.212270] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:51.584 [2024-07-15 19:48:42.212281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.584 [2024-07-15 19:48:42.212291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:51.584 [2024-07-15 19:48:42.212301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:26:51.584 [2024-07-15 19:48:42.212311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.584 [2024-07-15 19:48:42.266362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.585 [2024-07-15 19:48:42.266408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:51.585 [2024-07-15 19:48:42.266423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.001 ms 00:26:51.585 [2024-07-15 19:48:42.266434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.585 [2024-07-15 19:48:42.266537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.585 [2024-07-15 19:48:42.266549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:51.585 [2024-07-15 19:48:42.266564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:51.585 [2024-07-15 19:48:42.266574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:26:51.585 [2024-07-15 19:48:42.318691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.585 [2024-07-15 19:48:42.318735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:51.585 [2024-07-15 19:48:42.318749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.048 ms 00:26:51.585 [2024-07-15 19:48:42.318759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.585 [2024-07-15 19:48:42.318842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.585 [2024-07-15 19:48:42.318855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:51.585 [2024-07-15 19:48:42.318866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:51.585 [2024-07-15 19:48:42.318876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.585 [2024-07-15 19:48:42.319333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.585 [2024-07-15 19:48:42.319351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:51.585 [2024-07-15 19:48:42.319362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:26:51.585 [2024-07-15 19:48:42.319372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.585 [2024-07-15 19:48:42.319485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.585 [2024-07-15 19:48:42.319501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:51.585 [2024-07-15 19:48:42.319511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:26:51.585 [2024-07-15 19:48:42.319520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.585 [2024-07-15 19:48:42.340227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.585 [2024-07-15 19:48:42.340265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:51.585 [2024-07-15 19:48:42.340279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.684 ms 00:26:51.585 [2024-07-15 19:48:42.340289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.585 [2024-07-15 19:48:42.360601] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:51.585 [2024-07-15 19:48:42.360641] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:51.585 [2024-07-15 19:48:42.360656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.585 [2024-07-15 19:48:42.360683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:51.585 [2024-07-15 19:48:42.360694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.248 ms 00:26:51.585 [2024-07-15 19:48:42.360704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.843 [2024-07-15 19:48:42.391029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.843 [2024-07-15 19:48:42.391086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:51.843 [2024-07-15 19:48:42.391100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.279 ms 00:26:51.843 [2024-07-15 19:48:42.391111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.843 [2024-07-15 19:48:42.411365] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:26:51.843 [2024-07-15 19:48:42.411401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:51.843 [2024-07-15 19:48:42.411414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.203 ms 00:26:51.843 [2024-07-15 19:48:42.411424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.843 [2024-07-15 19:48:42.430672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.843 [2024-07-15 19:48:42.430708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:51.843 [2024-07-15 19:48:42.430721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.210 ms 00:26:51.843 [2024-07-15 19:48:42.430731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.843 [2024-07-15 19:48:42.431645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.843 [2024-07-15 19:48:42.431677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:51.843 [2024-07-15 19:48:42.431689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:26:51.843 [2024-07-15 19:48:42.431700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.843 [2024-07-15 19:48:42.523551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.843 [2024-07-15 19:48:42.523632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:51.843 [2024-07-15 19:48:42.523647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.827 ms 00:26:51.843 [2024-07-15 19:48:42.523674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.843 [2024-07-15 19:48:42.536209] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:51.843 [2024-07-15 19:48:42.539238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.843 [2024-07-15 19:48:42.539271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:51.843 [2024-07-15 19:48:42.539285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.504 ms 00:26:51.843 [2024-07-15 19:48:42.539296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.843 [2024-07-15 19:48:42.539390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.843 [2024-07-15 19:48:42.539406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:51.843 [2024-07-15 19:48:42.539417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:51.843 [2024-07-15 19:48:42.539428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.843 [2024-07-15 19:48:42.539498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.843 [2024-07-15 19:48:42.539511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:51.843 [2024-07-15 19:48:42.539522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:26:51.843 [2024-07-15 19:48:42.539532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.843 [2024-07-15 19:48:42.539552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.843 [2024-07-15 19:48:42.539563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:51.843 [2024-07-15 19:48:42.539576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.005 ms 00:26:51.843 [2024-07-15 19:48:42.539586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.843 [2024-07-15 19:48:42.539619] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:51.843 [2024-07-15 19:48:42.539632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.843 [2024-07-15 19:48:42.539642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:51.843 [2024-07-15 19:48:42.539663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:51.843 [2024-07-15 19:48:42.539672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.843 [2024-07-15 19:48:42.578865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.843 [2024-07-15 19:48:42.578910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:51.843 [2024-07-15 19:48:42.578928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.173 ms 00:26:51.843 [2024-07-15 19:48:42.578938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.843 [2024-07-15 19:48:42.579010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.843 [2024-07-15 19:48:42.579022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:51.843 [2024-07-15 19:48:42.579033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:51.843 [2024-07-15 19:48:42.579044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.843 [2024-07-15 19:48:42.580128] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 403.682 ms, result 0 00:27:25.468  Copying: 31/1024 [MB] (31 MBps) Copying: 64/1024 [MB] (32 MBps) Copying: 95/1024 [MB] (31 MBps) Copying: 127/1024 [MB] (31 MBps) Copying: 160/1024 [MB] (33 MBps) Copying: 194/1024 [MB] (33 MBps) Copying: 225/1024 [MB] (31 MBps) Copying: 256/1024 [MB] (30 MBps) Copying: 287/1024 [MB] (30 MBps) Copying: 318/1024 [MB] (31 MBps) Copying: 348/1024 [MB] (30 MBps) Copying: 380/1024 [MB] (31 MBps) Copying: 410/1024 [MB] (30 MBps) Copying: 441/1024 [MB] (30 MBps) Copying: 472/1024 [MB] (30 MBps) Copying: 502/1024 [MB] (30 MBps) Copying: 532/1024 [MB] (29 MBps) Copying: 562/1024 [MB] (30 MBps) Copying: 594/1024 [MB] (31 MBps) Copying: 625/1024 [MB] (31 MBps) Copying: 657/1024 [MB] (32 MBps) Copying: 688/1024 [MB] (31 MBps) Copying: 718/1024 [MB] (30 MBps) Copying: 749/1024 [MB] (30 MBps) Copying: 779/1024 [MB] (30 MBps) Copying: 811/1024 [MB] (31 MBps) Copying: 843/1024 [MB] (31 MBps) Copying: 874/1024 [MB] (31 MBps) Copying: 906/1024 [MB] (31 MBps) Copying: 938/1024 [MB] (31 MBps) Copying: 970/1024 [MB] (32 MBps) Copying: 1001/1024 [MB] (31 MBps) Copying: 1023/1024 [MB] (22 MBps) Copying: 1024/1024 [MB] (average 30 MBps)[2024-07-15 19:49:16.075070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.468 [2024-07-15 19:49:16.075151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:25.468 [2024-07-15 19:49:16.075169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:25.468 [2024-07-15 19:49:16.075180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.468 [2024-07-15 19:49:16.075757] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:25.468 
[2024-07-15 19:49:16.081454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.468 [2024-07-15 19:49:16.081493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:25.468 [2024-07-15 19:49:16.081507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.670 ms 00:27:25.468 [2024-07-15 19:49:16.081517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.468 [2024-07-15 19:49:16.091327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.468 [2024-07-15 19:49:16.091374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:25.468 [2024-07-15 19:49:16.091387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.690 ms 00:27:25.468 [2024-07-15 19:49:16.091397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.468 [2024-07-15 19:49:16.111558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.468 [2024-07-15 19:49:16.111604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:25.468 [2024-07-15 19:49:16.111620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.143 ms 00:27:25.468 [2024-07-15 19:49:16.111632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.468 [2024-07-15 19:49:16.116939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.468 [2024-07-15 19:49:16.116972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:25.468 [2024-07-15 19:49:16.116990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.272 ms 00:27:25.468 [2024-07-15 19:49:16.117000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.468 [2024-07-15 19:49:16.155103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.468 [2024-07-15 19:49:16.155142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:25.468 [2024-07-15 19:49:16.155156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.045 ms 00:27:25.468 [2024-07-15 19:49:16.155166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.468 [2024-07-15 19:49:16.176875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.468 [2024-07-15 19:49:16.176912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:25.468 [2024-07-15 19:49:16.176926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.672 ms 00:27:25.468 [2024-07-15 19:49:16.176952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.735 [2024-07-15 19:49:16.271691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.735 [2024-07-15 19:49:16.271762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:25.735 [2024-07-15 19:49:16.271792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.698 ms 00:27:25.735 [2024-07-15 19:49:16.271804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.735 [2024-07-15 19:49:16.311522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.735 [2024-07-15 19:49:16.311564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:27:25.735 [2024-07-15 19:49:16.311578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.691 ms 00:27:25.736 [2024-07-15 19:49:16.311589] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.736 [2024-07-15 19:49:16.349441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.736 [2024-07-15 19:49:16.349477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:25.736 [2024-07-15 19:49:16.349489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.815 ms 00:27:25.736 [2024-07-15 19:49:16.349514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.736 [2024-07-15 19:49:16.386909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.736 [2024-07-15 19:49:16.386958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:25.736 [2024-07-15 19:49:16.386971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.359 ms 00:27:25.736 [2024-07-15 19:49:16.386981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.736 [2024-07-15 19:49:16.424090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.736 [2024-07-15 19:49:16.424125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:25.736 [2024-07-15 19:49:16.424137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.035 ms 00:27:25.736 [2024-07-15 19:49:16.424162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.736 [2024-07-15 19:49:16.424196] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:25.736 [2024-07-15 19:49:16.424212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 102912 / 261120 wr_cnt: 1 state: open 00:27:25.736 [2024-07-15 19:49:16.424225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 
19:49:16.424362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:27:25.736 [2024-07-15 19:49:16.424625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:25.736 [2024-07-15 19:49:16.424767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.424791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.424803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.424813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.424824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.424835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.424846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.424856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.424866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.424895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.424906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.424917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.424948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.424958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.424969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.424980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.424991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:25.737 [2024-07-15 19:49:16.425335] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:25.737 [2024-07-15 19:49:16.425345] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8cd2928b-48a6-4d1e-8e3e-dde3bed8b2dd 00:27:25.737 [2024-07-15 19:49:16.425356] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 102912 00:27:25.737 [2024-07-15 19:49:16.425369] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 103872 00:27:25.737 [2024-07-15 19:49:16.425381] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 102912 00:27:25.737 [2024-07-15 19:49:16.425393] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0093 00:27:25.737 [2024-07-15 19:49:16.425402] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:25.737 [2024-07-15 19:49:16.425412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:25.737 [2024-07-15 19:49:16.425422] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:25.737 [2024-07-15 19:49:16.425431] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:25.737 [2024-07-15 19:49:16.425440] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:25.737 [2024-07-15 19:49:16.425450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.737 [2024-07-15 19:49:16.425460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:25.737 [2024-07-15 19:49:16.425481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.254 ms 00:27:25.737 [2024-07-15 19:49:16.425490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.737 [2024-07-15 19:49:16.445988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.737 [2024-07-15 19:49:16.446026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:27:25.737 [2024-07-15 19:49:16.446038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.464 ms 00:27:25.737 [2024-07-15 19:49:16.446064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.737 [2024-07-15 19:49:16.446564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.737 [2024-07-15 19:49:16.446584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:25.737 [2024-07-15 19:49:16.446595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:27:25.737 [2024-07-15 19:49:16.446605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.737 [2024-07-15 19:49:16.490592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.737 [2024-07-15 19:49:16.490627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:25.737 [2024-07-15 19:49:16.490639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.737 [2024-07-15 19:49:16.490650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.737 [2024-07-15 19:49:16.490704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.737 [2024-07-15 19:49:16.490715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:25.738 [2024-07-15 19:49:16.490725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.738 [2024-07-15 19:49:16.490734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.738 [2024-07-15 19:49:16.490814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.738 [2024-07-15 19:49:16.490844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:25.738 [2024-07-15 19:49:16.490855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.738 [2024-07-15 19:49:16.490865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.738 [2024-07-15 19:49:16.490882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.738 [2024-07-15 19:49:16.490893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:25.738 [2024-07-15 19:49:16.490903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.738 [2024-07-15 19:49:16.490931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.006 [2024-07-15 19:49:16.611611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.006 [2024-07-15 19:49:16.611668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:26.006 [2024-07-15 19:49:16.611683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.006 [2024-07-15 19:49:16.611693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.006 [2024-07-15 19:49:16.717754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.006 [2024-07-15 19:49:16.717830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:26.006 [2024-07-15 19:49:16.717844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.006 [2024-07-15 19:49:16.717854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.006 [2024-07-15 19:49:16.717921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.006 [2024-07-15 
19:49:16.717937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:26.006 [2024-07-15 19:49:16.717947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.006 [2024-07-15 19:49:16.717957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.006 [2024-07-15 19:49:16.718009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.006 [2024-07-15 19:49:16.718020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:26.006 [2024-07-15 19:49:16.718031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.006 [2024-07-15 19:49:16.718040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.006 [2024-07-15 19:49:16.718147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.006 [2024-07-15 19:49:16.718165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:26.006 [2024-07-15 19:49:16.718180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.006 [2024-07-15 19:49:16.718189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.006 [2024-07-15 19:49:16.718224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.006 [2024-07-15 19:49:16.718236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:26.006 [2024-07-15 19:49:16.718246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.006 [2024-07-15 19:49:16.718256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.006 [2024-07-15 19:49:16.718291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.006 [2024-07-15 19:49:16.718302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:26.006 [2024-07-15 19:49:16.718315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.007 [2024-07-15 19:49:16.718325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.007 [2024-07-15 19:49:16.718366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.007 [2024-07-15 19:49:16.718377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:26.007 [2024-07-15 19:49:16.718387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.007 [2024-07-15 19:49:16.718397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.007 [2024-07-15 19:49:16.718522] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 647.004 ms, result 0 00:27:28.538 00:27:28.538 00:27:28.538 19:49:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:29.927 19:49:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:29.927 [2024-07-15 19:49:20.617834] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:27:29.927 [2024-07-15 19:49:20.617945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85194 ] 00:27:30.186 [2024-07-15 19:49:20.786807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.444 [2024-07-15 19:49:21.061904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.705 [2024-07-15 19:49:21.453083] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:30.705 [2024-07-15 19:49:21.453150] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:30.965 [2024-07-15 19:49:21.613879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.965 [2024-07-15 19:49:21.613921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:30.965 [2024-07-15 19:49:21.613937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:30.965 [2024-07-15 19:49:21.613947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.965 [2024-07-15 19:49:21.614001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.965 [2024-07-15 19:49:21.614014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:30.965 [2024-07-15 19:49:21.614025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:30.965 [2024-07-15 19:49:21.614038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.965 [2024-07-15 19:49:21.614059] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:30.965 [2024-07-15 19:49:21.615243] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:30.965 [2024-07-15 19:49:21.615274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.965 [2024-07-15 19:49:21.615289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:30.965 [2024-07-15 19:49:21.615301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.219 ms 00:27:30.965 [2024-07-15 19:49:21.615311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.965 [2024-07-15 19:49:21.616704] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:30.965 [2024-07-15 19:49:21.637242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.965 [2024-07-15 19:49:21.637281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:30.965 [2024-07-15 19:49:21.637296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.538 ms 00:27:30.965 [2024-07-15 19:49:21.637307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.965 [2024-07-15 19:49:21.637372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.965 [2024-07-15 19:49:21.637384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:30.965 [2024-07-15 19:49:21.637398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:30.965 [2024-07-15 19:49:21.637408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.965 [2024-07-15 19:49:21.644020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:30.965 [2024-07-15 19:49:21.644049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:30.965 [2024-07-15 19:49:21.644061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.543 ms 00:27:30.965 [2024-07-15 19:49:21.644071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.965 [2024-07-15 19:49:21.644150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.965 [2024-07-15 19:49:21.644167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:30.965 [2024-07-15 19:49:21.644178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:27:30.965 [2024-07-15 19:49:21.644188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.965 [2024-07-15 19:49:21.644231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.965 [2024-07-15 19:49:21.644243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:30.965 [2024-07-15 19:49:21.644254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:30.965 [2024-07-15 19:49:21.644264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.965 [2024-07-15 19:49:21.644290] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:30.965 [2024-07-15 19:49:21.649796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.965 [2024-07-15 19:49:21.649826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:30.965 [2024-07-15 19:49:21.649838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.512 ms 00:27:30.965 [2024-07-15 19:49:21.649848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.965 [2024-07-15 19:49:21.649885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.965 [2024-07-15 19:49:21.649895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:30.965 [2024-07-15 19:49:21.649905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:30.965 [2024-07-15 19:49:21.649915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.965 [2024-07-15 19:49:21.649964] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:30.965 [2024-07-15 19:49:21.649988] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:30.965 [2024-07-15 19:49:21.650021] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:30.965 [2024-07-15 19:49:21.650040] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:30.965 [2024-07-15 19:49:21.650140] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:30.965 [2024-07-15 19:49:21.650153] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:30.965 [2024-07-15 19:49:21.650166] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:30.965 [2024-07-15 19:49:21.650179] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:30.965 [2024-07-15 19:49:21.650190] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:30.965 [2024-07-15 19:49:21.650201] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:30.965 [2024-07-15 19:49:21.650211] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:30.965 [2024-07-15 19:49:21.650221] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:30.965 [2024-07-15 19:49:21.650230] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:30.965 [2024-07-15 19:49:21.650240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.965 [2024-07-15 19:49:21.650254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:30.965 [2024-07-15 19:49:21.650264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:27:30.965 [2024-07-15 19:49:21.650273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.965 [2024-07-15 19:49:21.650341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.965 [2024-07-15 19:49:21.650352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:30.965 [2024-07-15 19:49:21.650362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:27:30.965 [2024-07-15 19:49:21.650372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.965 [2024-07-15 19:49:21.650455] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:30.965 [2024-07-15 19:49:21.650472] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:30.965 [2024-07-15 19:49:21.650485] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:30.965 [2024-07-15 19:49:21.650495] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.965 [2024-07-15 19:49:21.650506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:30.965 [2024-07-15 19:49:21.650536] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:30.965 [2024-07-15 19:49:21.650546] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:30.965 [2024-07-15 19:49:21.650556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:30.966 [2024-07-15 19:49:21.650565] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:30.966 [2024-07-15 19:49:21.650574] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:30.966 [2024-07-15 19:49:21.650585] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:30.966 [2024-07-15 19:49:21.650595] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:30.966 [2024-07-15 19:49:21.650603] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:30.966 [2024-07-15 19:49:21.650613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:30.966 [2024-07-15 19:49:21.650622] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:30.966 [2024-07-15 19:49:21.650631] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.966 [2024-07-15 19:49:21.650640] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:30.966 [2024-07-15 19:49:21.650650] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:30.966 [2024-07-15 19:49:21.650658] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.966 [2024-07-15 19:49:21.650668] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:30.966 [2024-07-15 19:49:21.650688] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:30.966 [2024-07-15 19:49:21.650698] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:30.966 [2024-07-15 19:49:21.650707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:30.966 [2024-07-15 19:49:21.650716] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:30.966 [2024-07-15 19:49:21.650725] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:30.966 [2024-07-15 19:49:21.650734] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:30.966 [2024-07-15 19:49:21.650744] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:30.966 [2024-07-15 19:49:21.650753] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:30.966 [2024-07-15 19:49:21.650762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:30.966 [2024-07-15 19:49:21.650771] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:30.966 [2024-07-15 19:49:21.650799] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:30.966 [2024-07-15 19:49:21.650809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:30.966 [2024-07-15 19:49:21.650818] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:30.966 [2024-07-15 19:49:21.650828] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:30.966 [2024-07-15 19:49:21.650837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:30.966 [2024-07-15 19:49:21.650846] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:30.966 [2024-07-15 19:49:21.650855] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:30.966 [2024-07-15 19:49:21.650865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:30.966 [2024-07-15 19:49:21.650875] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:30.966 [2024-07-15 19:49:21.650884] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.966 [2024-07-15 19:49:21.650893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:30.966 [2024-07-15 19:49:21.650903] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:30.966 [2024-07-15 19:49:21.650912] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.966 [2024-07-15 19:49:21.650921] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:30.966 [2024-07-15 19:49:21.650931] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:30.966 [2024-07-15 19:49:21.650940] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:30.966 [2024-07-15 19:49:21.650950] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.966 [2024-07-15 19:49:21.650960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:30.966 [2024-07-15 19:49:21.650970] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:30.966 [2024-07-15 19:49:21.650980] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:30.966 
[2024-07-15 19:49:21.650989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:30.966 [2024-07-15 19:49:21.650998] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:30.966 [2024-07-15 19:49:21.651007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:30.966 [2024-07-15 19:49:21.651018] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:30.966 [2024-07-15 19:49:21.651034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:30.966 [2024-07-15 19:49:21.651046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:30.966 [2024-07-15 19:49:21.651057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:30.966 [2024-07-15 19:49:21.651067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:30.966 [2024-07-15 19:49:21.651077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:30.966 [2024-07-15 19:49:21.651088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:30.966 [2024-07-15 19:49:21.651098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:30.966 [2024-07-15 19:49:21.651108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:30.966 [2024-07-15 19:49:21.651118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:30.966 [2024-07-15 19:49:21.651128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:30.966 [2024-07-15 19:49:21.651138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:30.966 [2024-07-15 19:49:21.651149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:30.966 [2024-07-15 19:49:21.651159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:30.966 [2024-07-15 19:49:21.651169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:30.966 [2024-07-15 19:49:21.651179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:30.966 [2024-07-15 19:49:21.651190] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:30.966 [2024-07-15 19:49:21.651200] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:30.966 [2024-07-15 19:49:21.651211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:30.966 [2024-07-15 19:49:21.651221] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:30.966 [2024-07-15 19:49:21.651231] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:30.966 [2024-07-15 19:49:21.651242] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:30.966 [2024-07-15 19:49:21.651252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.966 [2024-07-15 19:49:21.651266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:30.966 [2024-07-15 19:49:21.651276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.849 ms 00:27:30.966 [2024-07-15 19:49:21.651286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.966 [2024-07-15 19:49:21.706060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.966 [2024-07-15 19:49:21.706104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:30.966 [2024-07-15 19:49:21.706118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.727 ms 00:27:30.966 [2024-07-15 19:49:21.706144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.966 [2024-07-15 19:49:21.706226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.966 [2024-07-15 19:49:21.706237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:30.966 [2024-07-15 19:49:21.706248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:30.966 [2024-07-15 19:49:21.706257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.225 [2024-07-15 19:49:21.755693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.225 [2024-07-15 19:49:21.755886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:31.225 [2024-07-15 19:49:21.755969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.376 ms 00:27:31.225 [2024-07-15 19:49:21.756007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.225 [2024-07-15 19:49:21.756067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.225 [2024-07-15 19:49:21.756099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:31.225 [2024-07-15 19:49:21.756130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:31.225 [2024-07-15 19:49:21.756159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.225 [2024-07-15 19:49:21.756696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.225 [2024-07-15 19:49:21.756810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:31.225 [2024-07-15 19:49:21.756883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:27:31.225 [2024-07-15 19:49:21.756918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.225 [2024-07-15 19:49:21.757064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.225 [2024-07-15 19:49:21.757171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:31.225 [2024-07-15 19:49:21.757245] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:27:31.225 [2024-07-15 19:49:21.757257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.225 [2024-07-15 19:49:21.777587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.225 [2024-07-15 19:49:21.777745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:31.225 [2024-07-15 19:49:21.777872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.304 ms 00:27:31.225 [2024-07-15 19:49:21.777913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.225 [2024-07-15 19:49:21.797649] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:31.225 [2024-07-15 19:49:21.797854] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:31.225 [2024-07-15 19:49:21.797950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.225 [2024-07-15 19:49:21.797982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:31.225 [2024-07-15 19:49:21.798013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.909 ms 00:27:31.225 [2024-07-15 19:49:21.798043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.225 [2024-07-15 19:49:21.828066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.225 [2024-07-15 19:49:21.828228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:31.225 [2024-07-15 19:49:21.828304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.949 ms 00:27:31.225 [2024-07-15 19:49:21.828347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.225 [2024-07-15 19:49:21.846980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.225 [2024-07-15 19:49:21.847138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:31.225 [2024-07-15 19:49:21.847208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.575 ms 00:27:31.225 [2024-07-15 19:49:21.847241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.225 [2024-07-15 19:49:21.866179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.226 [2024-07-15 19:49:21.866338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:31.226 [2024-07-15 19:49:21.866410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.884 ms 00:27:31.226 [2024-07-15 19:49:21.866444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.226 [2024-07-15 19:49:21.867305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.226 [2024-07-15 19:49:21.867427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:31.226 [2024-07-15 19:49:21.867500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:27:31.226 [2024-07-15 19:49:21.867534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.226 [2024-07-15 19:49:21.956067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.226 [2024-07-15 19:49:21.956350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:31.226 [2024-07-15 19:49:21.956473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 88.487 ms 00:27:31.226 [2024-07-15 19:49:21.956511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.226 [2024-07-15 19:49:21.968364] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:31.226 [2024-07-15 19:49:21.971320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.226 [2024-07-15 19:49:21.971446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:31.226 [2024-07-15 19:49:21.971518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.739 ms 00:27:31.226 [2024-07-15 19:49:21.971552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.226 [2024-07-15 19:49:21.971658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.226 [2024-07-15 19:49:21.971694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:31.226 [2024-07-15 19:49:21.971725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:31.226 [2024-07-15 19:49:21.971754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.226 [2024-07-15 19:49:21.973331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.226 [2024-07-15 19:49:21.973460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:31.226 [2024-07-15 19:49:21.973528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.427 ms 00:27:31.226 [2024-07-15 19:49:21.973561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.226 [2024-07-15 19:49:21.973617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.226 [2024-07-15 19:49:21.973649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:31.226 [2024-07-15 19:49:21.973679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:31.226 [2024-07-15 19:49:21.973708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.226 [2024-07-15 19:49:21.973761] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:31.226 [2024-07-15 19:49:21.973905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.226 [2024-07-15 19:49:21.973956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:31.226 [2024-07-15 19:49:21.973990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:27:31.226 [2024-07-15 19:49:21.974019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.226 [2024-07-15 19:49:22.014782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.226 [2024-07-15 19:49:22.014923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:31.226 [2024-07-15 19:49:22.014945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.709 ms 00:27:31.226 [2024-07-15 19:49:22.014956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.226 [2024-07-15 19:49:22.015021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.226 [2024-07-15 19:49:22.015042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:31.226 [2024-07-15 19:49:22.015054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:31.226 [2024-07-15 19:49:22.015064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
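A hedged cross-check of the layout dump printed above: 20971520 L2P entries at 4 bytes per address account for 80 MiB, which agrees with both "Region l2p ... blocks: 80.00 MiB" and the superblock entry "Region type:0x2 ... blk_sz:0x5000" (0x5000 = 20480 blocks of 4 KiB = 80 MiB). In Python (variable names are illustrative):

    l2p_entries = 20971520            # "L2P entries" from ftl_layout_setup above
    l2p_addr_size = 4                 # "L2P address size" in bytes
    l2p_bytes = l2p_entries * l2p_addr_size
    print(l2p_bytes / (1024 * 1024))  # 80.0   -> "Region l2p ... blocks: 80.00 MiB"
    print(hex(l2p_bytes // 4096))     # 0x5000 -> "Region type:0x2 ... blk_sz:0x5000"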
00:27:31.483 [2024-07-15 19:49:22.020705] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 405.408 ms, result 0 00:27:59.369  Copying: 1124/1048576 [kB] (1124 kBps) Copying: 5028/1048576 [kB] (3904 kBps) Copying: 35/1024 [MB] (31 MBps) Copying: 75/1024 [MB] (39 MBps) Copying: 115/1024 [MB] (39 MBps) Copying: 156/1024 [MB] (40 MBps) Copying: 198/1024 [MB] (41 MBps) Copying: 239/1024 [MB] (41 MBps) Copying: 280/1024 [MB] (40 MBps) Copying: 321/1024 [MB] (40 MBps) Copying: 361/1024 [MB] (40 MBps) Copying: 401/1024 [MB] (40 MBps) Copying: 442/1024 [MB] (40 MBps) Copying: 484/1024 [MB] (41 MBps) Copying: 524/1024 [MB] (39 MBps) Copying: 563/1024 [MB] (39 MBps) Copying: 604/1024 [MB] (40 MBps) Copying: 642/1024 [MB] (38 MBps) Copying: 681/1024 [MB] (39 MBps) Copying: 722/1024 [MB] (40 MBps) Copying: 762/1024 [MB] (40 MBps) Copying: 803/1024 [MB] (40 MBps) Copying: 842/1024 [MB] (39 MBps) Copying: 878/1024 [MB] (36 MBps) Copying: 917/1024 [MB] (38 MBps) Copying: 957/1024 [MB] (39 MBps) Copying: 996/1024 [MB] (39 MBps) Copying: 1024/1024 [MB] (average 36 MBps)[2024-07-15 19:49:50.073535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.369 [2024-07-15 19:49:50.073618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:59.369 [2024-07-15 19:49:50.073643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:59.369 [2024-07-15 19:49:50.073661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.369 [2024-07-15 19:49:50.073706] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:59.369 [2024-07-15 19:49:50.079102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.369 [2024-07-15 19:49:50.079152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:59.369 [2024-07-15 19:49:50.079174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.351 ms 00:27:59.369 [2024-07-15 19:49:50.079191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.369 [2024-07-15 19:49:50.079654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.369 [2024-07-15 19:49:50.079686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:59.369 [2024-07-15 19:49:50.079705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:27:59.369 [2024-07-15 19:49:50.079722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.369 [2024-07-15 19:49:50.092328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.369 [2024-07-15 19:49:50.092383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:59.369 [2024-07-15 19:49:50.092401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.569 ms 00:27:59.369 [2024-07-15 19:49:50.092414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.369 [2024-07-15 19:49:50.097660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.369 [2024-07-15 19:49:50.097696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:59.369 [2024-07-15 19:49:50.097709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.207 ms 00:27:59.369 [2024-07-15 19:49:50.097719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.369 [2024-07-15 19:49:50.137797] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.369 [2024-07-15 19:49:50.137834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:59.369 [2024-07-15 19:49:50.137848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.986 ms 00:27:59.369 [2024-07-15 19:49:50.137858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.369 [2024-07-15 19:49:50.159180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.369 [2024-07-15 19:49:50.159224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:59.369 [2024-07-15 19:49:50.159237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.284 ms 00:27:59.369 [2024-07-15 19:49:50.159247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.629 [2024-07-15 19:49:50.163049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.629 [2024-07-15 19:49:50.163088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:59.629 [2024-07-15 19:49:50.163100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.760 ms 00:27:59.629 [2024-07-15 19:49:50.163111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.629 [2024-07-15 19:49:50.201583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.629 [2024-07-15 19:49:50.201621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:27:59.629 [2024-07-15 19:49:50.201634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.456 ms 00:27:59.629 [2024-07-15 19:49:50.201644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.629 [2024-07-15 19:49:50.241192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.629 [2024-07-15 19:49:50.241226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:59.629 [2024-07-15 19:49:50.241240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.510 ms 00:27:59.629 [2024-07-15 19:49:50.241250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.629 [2024-07-15 19:49:50.280023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.629 [2024-07-15 19:49:50.280055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:59.629 [2024-07-15 19:49:50.280068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.735 ms 00:27:59.629 [2024-07-15 19:49:50.280090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.629 [2024-07-15 19:49:50.319748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.629 [2024-07-15 19:49:50.319798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:59.629 [2024-07-15 19:49:50.319812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.580 ms 00:27:59.629 [2024-07-15 19:49:50.319823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.629 [2024-07-15 19:49:50.319860] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:59.629 [2024-07-15 19:49:50.319878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:59.629 [2024-07-15 19:49:50.319891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 
00:27:59.629 [2024-07-15 19:49:50.319903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.319914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.319924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.319935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.319946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.319957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.319967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.319978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.319989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:27:59.629 [2024-07-15 19:49:50.320170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:59.629 [2024-07-15 19:49:50.320466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320689] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:59.630 [2024-07-15 19:49:50.320961] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:59.630 [2024-07-15 19:49:50.320972] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: 8cd2928b-48a6-4d1e-8e3e-dde3bed8b2dd 00:27:59.630 [2024-07-15 19:49:50.320983] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:27:59.630 [2024-07-15 19:49:50.320993] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 164032 00:27:59.630 [2024-07-15 19:49:50.321003] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 162048 00:27:59.630 [2024-07-15 19:49:50.321019] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0122 00:27:59.630 [2024-07-15 19:49:50.321029] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:59.630 [2024-07-15 19:49:50.321042] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:59.630 [2024-07-15 19:49:50.321052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:59.630 [2024-07-15 19:49:50.321061] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:59.630 [2024-07-15 19:49:50.321070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:59.630 [2024-07-15 19:49:50.321081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.630 [2024-07-15 19:49:50.321091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:59.630 [2024-07-15 19:49:50.321101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.222 ms 00:27:59.630 [2024-07-15 19:49:50.321111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.630 [2024-07-15 19:49:50.342447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.630 [2024-07-15 19:49:50.342482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:59.630 [2024-07-15 19:49:50.342495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.304 ms 00:27:59.630 [2024-07-15 19:49:50.342520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.630 [2024-07-15 19:49:50.343114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.630 [2024-07-15 19:49:50.343132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:59.630 [2024-07-15 19:49:50.343143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:27:59.630 [2024-07-15 19:49:50.343153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.630 [2024-07-15 19:49:50.389943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.630 [2024-07-15 19:49:50.389979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:59.630 [2024-07-15 19:49:50.389996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.630 [2024-07-15 19:49:50.390023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.630 [2024-07-15 19:49:50.390072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.630 [2024-07-15 19:49:50.390083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:59.630 [2024-07-15 19:49:50.390094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.630 [2024-07-15 19:49:50.390103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.630 [2024-07-15 19:49:50.390162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.630 [2024-07-15 19:49:50.390175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize trim map 00:27:59.630 [2024-07-15 19:49:50.390185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.630 [2024-07-15 19:49:50.390200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.630 [2024-07-15 19:49:50.390217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.630 [2024-07-15 19:49:50.390227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:59.630 [2024-07-15 19:49:50.390237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.630 [2024-07-15 19:49:50.390247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.889 [2024-07-15 19:49:50.513743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.889 [2024-07-15 19:49:50.513804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:59.889 [2024-07-15 19:49:50.513825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.889 [2024-07-15 19:49:50.513835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.889 [2024-07-15 19:49:50.617102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.889 [2024-07-15 19:49:50.617156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:59.889 [2024-07-15 19:49:50.617170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.889 [2024-07-15 19:49:50.617181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.889 [2024-07-15 19:49:50.617246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.889 [2024-07-15 19:49:50.617257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:59.889 [2024-07-15 19:49:50.617267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.889 [2024-07-15 19:49:50.617277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.889 [2024-07-15 19:49:50.617318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.889 [2024-07-15 19:49:50.617329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:59.889 [2024-07-15 19:49:50.617339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.889 [2024-07-15 19:49:50.617348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.889 [2024-07-15 19:49:50.617448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.889 [2024-07-15 19:49:50.617460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:59.889 [2024-07-15 19:49:50.617471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.889 [2024-07-15 19:49:50.617480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.889 [2024-07-15 19:49:50.617516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.889 [2024-07-15 19:49:50.617528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:59.889 [2024-07-15 19:49:50.617538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.889 [2024-07-15 19:49:50.617547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.889 [2024-07-15 19:49:50.617583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.889 [2024-07-15 
19:49:50.617593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:59.889 [2024-07-15 19:49:50.617603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.889 [2024-07-15 19:49:50.617612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.889 [2024-07-15 19:49:50.617655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.889 [2024-07-15 19:49:50.617666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:59.889 [2024-07-15 19:49:50.617676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.889 [2024-07-15 19:49:50.617685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.889 [2024-07-15 19:49:50.617854] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 544.242 ms, result 0 00:28:01.262 00:28:01.262 00:28:01.262 19:49:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:03.271 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:03.271 19:49:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:03.271 [2024-07-15 19:49:53.711358] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:28:03.271 [2024-07-15 19:49:53.711481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85528 ] 00:28:03.271 [2024-07-15 19:49:53.878959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.530 [2024-07-15 19:49:54.168304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.788 [2024-07-15 19:49:54.569540] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:03.788 [2024-07-15 19:49:54.569608] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:04.046 [2024-07-15 19:49:54.731242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.046 [2024-07-15 19:49:54.731295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:04.046 [2024-07-15 19:49:54.731311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:04.046 [2024-07-15 19:49:54.731322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.046 [2024-07-15 19:49:54.731373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.046 [2024-07-15 19:49:54.731387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:04.046 [2024-07-15 19:49:54.731398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:28:04.046 [2024-07-15 19:49:54.731411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.046 [2024-07-15 19:49:54.731439] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:04.046 [2024-07-15 19:49:54.732567] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:04.046 
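A hedged cross-check of the statistics dumped during the FTL shutdown above: the reported WAF of 1.0122 is consistent with total writes divided by user writes, and "total valid LBAs: 264960" equals the valid blocks of Band 1 (261120, closed) plus Band 2 (3840, open) from the band dump. In Python:

    total_writes = 164032                        # "total writes" from ftl_dev_dump_stats above
    user_writes = 162048                         # "user writes"
    print(round(total_writes / user_writes, 4))  # 1.0122 -> matches "WAF: 1.0122"
    print(261120 + 3840)                         # 264960 -> "total valid LBAs: 264960"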
[2024-07-15 19:49:54.732594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.046 [2024-07-15 19:49:54.732609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:04.046 [2024-07-15 19:49:54.732620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.159 ms 00:28:04.046 [2024-07-15 19:49:54.732629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.046 [2024-07-15 19:49:54.734033] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:04.046 [2024-07-15 19:49:54.754190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.046 [2024-07-15 19:49:54.754226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:04.046 [2024-07-15 19:49:54.754240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.157 ms 00:28:04.046 [2024-07-15 19:49:54.754266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.046 [2024-07-15 19:49:54.754328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.046 [2024-07-15 19:49:54.754340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:04.046 [2024-07-15 19:49:54.754354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:28:04.046 [2024-07-15 19:49:54.754364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.046 [2024-07-15 19:49:54.760994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.046 [2024-07-15 19:49:54.761019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:04.046 [2024-07-15 19:49:54.761030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.563 ms 00:28:04.046 [2024-07-15 19:49:54.761056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.046 [2024-07-15 19:49:54.761131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.046 [2024-07-15 19:49:54.761148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:04.046 [2024-07-15 19:49:54.761158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:04.046 [2024-07-15 19:49:54.761168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.046 [2024-07-15 19:49:54.761209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.046 [2024-07-15 19:49:54.761221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:04.046 [2024-07-15 19:49:54.761231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:04.046 [2024-07-15 19:49:54.761241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.046 [2024-07-15 19:49:54.761265] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:04.046 [2024-07-15 19:49:54.766842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.046 [2024-07-15 19:49:54.766891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:04.046 [2024-07-15 19:49:54.766904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.583 ms 00:28:04.046 [2024-07-15 19:49:54.766913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.046 [2024-07-15 19:49:54.766952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.046 [2024-07-15 
19:49:54.766963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:04.046 [2024-07-15 19:49:54.766974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:04.046 [2024-07-15 19:49:54.766984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.046 [2024-07-15 19:49:54.767033] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:04.046 [2024-07-15 19:49:54.767058] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:04.046 [2024-07-15 19:49:54.767091] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:04.046 [2024-07-15 19:49:54.767111] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:28:04.046 [2024-07-15 19:49:54.767195] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:04.046 [2024-07-15 19:49:54.767208] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:04.046 [2024-07-15 19:49:54.767221] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:04.046 [2024-07-15 19:49:54.767234] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:04.046 [2024-07-15 19:49:54.767245] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:04.046 [2024-07-15 19:49:54.767256] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:04.047 [2024-07-15 19:49:54.767266] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:04.047 [2024-07-15 19:49:54.767275] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:04.047 [2024-07-15 19:49:54.767285] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:04.047 [2024-07-15 19:49:54.767295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.047 [2024-07-15 19:49:54.767308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:04.047 [2024-07-15 19:49:54.767318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:28:04.047 [2024-07-15 19:49:54.767328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.047 [2024-07-15 19:49:54.767394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.047 [2024-07-15 19:49:54.767405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:04.047 [2024-07-15 19:49:54.767415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:28:04.047 [2024-07-15 19:49:54.767424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.047 [2024-07-15 19:49:54.767501] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:04.047 [2024-07-15 19:49:54.767513] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:04.047 [2024-07-15 19:49:54.767526] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:04.047 [2024-07-15 19:49:54.767536] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.047 [2024-07-15 19:49:54.767547] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region l2p 00:28:04.047 [2024-07-15 19:49:54.767556] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:04.047 [2024-07-15 19:49:54.767565] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:04.047 [2024-07-15 19:49:54.767576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:04.047 [2024-07-15 19:49:54.767585] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:04.047 [2024-07-15 19:49:54.767594] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:04.047 [2024-07-15 19:49:54.767603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:04.047 [2024-07-15 19:49:54.767613] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:04.047 [2024-07-15 19:49:54.767622] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:04.047 [2024-07-15 19:49:54.767632] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:04.047 [2024-07-15 19:49:54.767641] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:04.047 [2024-07-15 19:49:54.767650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.047 [2024-07-15 19:49:54.767659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:04.047 [2024-07-15 19:49:54.767668] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:04.047 [2024-07-15 19:49:54.767677] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.047 [2024-07-15 19:49:54.767687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:04.047 [2024-07-15 19:49:54.767705] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:04.047 [2024-07-15 19:49:54.767714] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:04.047 [2024-07-15 19:49:54.767723] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:04.047 [2024-07-15 19:49:54.767733] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:04.047 [2024-07-15 19:49:54.767742] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:04.047 [2024-07-15 19:49:54.767750] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:04.047 [2024-07-15 19:49:54.767760] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:04.047 [2024-07-15 19:49:54.767769] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:04.047 [2024-07-15 19:49:54.767795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:04.047 [2024-07-15 19:49:54.767805] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:04.047 [2024-07-15 19:49:54.767814] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:04.047 [2024-07-15 19:49:54.767823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:04.047 [2024-07-15 19:49:54.767833] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:04.047 [2024-07-15 19:49:54.767842] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:04.047 [2024-07-15 19:49:54.767851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:04.047 [2024-07-15 19:49:54.767860] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:04.047 [2024-07-15 19:49:54.767869] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:04.047 [2024-07-15 19:49:54.767895] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:04.047 [2024-07-15 19:49:54.767904] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:04.047 [2024-07-15 19:49:54.767913] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.047 [2024-07-15 19:49:54.767922] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:04.047 [2024-07-15 19:49:54.767947] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:04.047 [2024-07-15 19:49:54.767957] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.047 [2024-07-15 19:49:54.767967] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:04.047 [2024-07-15 19:49:54.767977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:04.047 [2024-07-15 19:49:54.767987] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:04.047 [2024-07-15 19:49:54.767996] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.047 [2024-07-15 19:49:54.768007] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:04.047 [2024-07-15 19:49:54.768016] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:04.047 [2024-07-15 19:49:54.768025] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:04.047 [2024-07-15 19:49:54.768044] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:04.047 [2024-07-15 19:49:54.768053] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:04.047 [2024-07-15 19:49:54.768062] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:04.047 [2024-07-15 19:49:54.768073] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:04.047 [2024-07-15 19:49:54.768085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:04.047 [2024-07-15 19:49:54.768097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:04.047 [2024-07-15 19:49:54.768107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:04.047 [2024-07-15 19:49:54.768118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:04.047 [2024-07-15 19:49:54.768128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:04.047 [2024-07-15 19:49:54.768138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:04.047 [2024-07-15 19:49:54.768149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:04.047 [2024-07-15 19:49:54.768159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:04.047 [2024-07-15 19:49:54.768169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe 
ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:04.047 [2024-07-15 19:49:54.768179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:04.047 [2024-07-15 19:49:54.768189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:04.047 [2024-07-15 19:49:54.768200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:04.047 [2024-07-15 19:49:54.768210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:04.047 [2024-07-15 19:49:54.768220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:04.047 [2024-07-15 19:49:54.768231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:04.047 [2024-07-15 19:49:54.768241] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:04.047 [2024-07-15 19:49:54.768252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:04.047 [2024-07-15 19:49:54.768263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:04.047 [2024-07-15 19:49:54.768273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:04.047 [2024-07-15 19:49:54.768284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:04.047 [2024-07-15 19:49:54.768294] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:04.047 [2024-07-15 19:49:54.768306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.047 [2024-07-15 19:49:54.768320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:04.047 [2024-07-15 19:49:54.768330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms 00:28:04.047 [2024-07-15 19:49:54.768340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.047 [2024-07-15 19:49:54.823740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.047 [2024-07-15 19:49:54.823793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:04.047 [2024-07-15 19:49:54.823807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.351 ms 00:28:04.047 [2024-07-15 19:49:54.823818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.047 [2024-07-15 19:49:54.823906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.047 [2024-07-15 19:49:54.823918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:04.047 [2024-07-15 19:49:54.823928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:28:04.047 [2024-07-15 19:49:54.823955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.306 [2024-07-15 19:49:54.875803] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:28:04.306 [2024-07-15 19:49:54.875835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:04.306 [2024-07-15 19:49:54.875849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.789 ms 00:28:04.306 [2024-07-15 19:49:54.875859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.306 [2024-07-15 19:49:54.875897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.306 [2024-07-15 19:49:54.875908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:04.306 [2024-07-15 19:49:54.875918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:04.306 [2024-07-15 19:49:54.875927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.306 [2024-07-15 19:49:54.876381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.306 [2024-07-15 19:49:54.876394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:04.306 [2024-07-15 19:49:54.876405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:28:04.306 [2024-07-15 19:49:54.876415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.306 [2024-07-15 19:49:54.876529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.306 [2024-07-15 19:49:54.876542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:04.306 [2024-07-15 19:49:54.876552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:28:04.306 [2024-07-15 19:49:54.876562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.306 [2024-07-15 19:49:54.897384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.306 [2024-07-15 19:49:54.897419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:04.306 [2024-07-15 19:49:54.897433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.800 ms 00:28:04.306 [2024-07-15 19:49:54.897443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.306 [2024-07-15 19:49:54.917903] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:04.306 [2024-07-15 19:49:54.917945] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:04.306 [2024-07-15 19:49:54.917961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.306 [2024-07-15 19:49:54.917972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:04.306 [2024-07-15 19:49:54.917984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.405 ms 00:28:04.306 [2024-07-15 19:49:54.917994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.306 [2024-07-15 19:49:54.949235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.306 [2024-07-15 19:49:54.949277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:04.306 [2024-07-15 19:49:54.949292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.197 ms 00:28:04.306 [2024-07-15 19:49:54.949309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.306 [2024-07-15 19:49:54.968230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.306 [2024-07-15 19:49:54.968263] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:04.306 [2024-07-15 19:49:54.968277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.874 ms 00:28:04.306 [2024-07-15 19:49:54.968287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.306 [2024-07-15 19:49:54.987851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.306 [2024-07-15 19:49:54.987892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:04.306 [2024-07-15 19:49:54.987905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.527 ms 00:28:04.306 [2024-07-15 19:49:54.987914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.306 [2024-07-15 19:49:54.988829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.306 [2024-07-15 19:49:54.988854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:04.306 [2024-07-15 19:49:54.988867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.804 ms 00:28:04.306 [2024-07-15 19:49:54.988877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.306 [2024-07-15 19:49:55.077677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.306 [2024-07-15 19:49:55.077756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:04.306 [2024-07-15 19:49:55.077773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.778 ms 00:28:04.306 [2024-07-15 19:49:55.077813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.306 [2024-07-15 19:49:55.089258] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:04.306 [2024-07-15 19:49:55.092301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.306 [2024-07-15 19:49:55.092332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:04.306 [2024-07-15 19:49:55.092346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.413 ms 00:28:04.306 [2024-07-15 19:49:55.092356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.306 [2024-07-15 19:49:55.092445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.306 [2024-07-15 19:49:55.092457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:04.306 [2024-07-15 19:49:55.092469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:04.306 [2024-07-15 19:49:55.092478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.306 [2024-07-15 19:49:55.093373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.306 [2024-07-15 19:49:55.093398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:04.306 [2024-07-15 19:49:55.093410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.856 ms 00:28:04.306 [2024-07-15 19:49:55.093420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.306 [2024-07-15 19:49:55.093443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.306 [2024-07-15 19:49:55.093455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:04.306 [2024-07-15 19:49:55.093465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:04.306 [2024-07-15 19:49:55.093474] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.306 [2024-07-15 19:49:55.093506] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:04.306 [2024-07-15 19:49:55.093518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.306 [2024-07-15 19:49:55.093529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:04.306 [2024-07-15 19:49:55.093542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:04.306 [2024-07-15 19:49:55.093551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.565 [2024-07-15 19:49:55.131438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.565 [2024-07-15 19:49:55.131477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:04.565 [2024-07-15 19:49:55.131491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.867 ms 00:28:04.565 [2024-07-15 19:49:55.131501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.565 [2024-07-15 19:49:55.131571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.565 [2024-07-15 19:49:55.131592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:04.565 [2024-07-15 19:49:55.131602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:28:04.565 [2024-07-15 19:49:55.131612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.565 [2024-07-15 19:49:55.132757] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 400.996 ms, result 0 00:28:35.335  Copying: 34/1024 [MB] (34 MBps) Copying: 67/1024 [MB] (32 MBps) Copying: 100/1024 [MB] (33 MBps) Copying: 134/1024 [MB] (33 MBps) Copying: 167/1024 [MB] (32 MBps) Copying: 200/1024 [MB] (32 MBps) Copying: 234/1024 [MB] (34 MBps) Copying: 267/1024 [MB] (33 MBps) Copying: 300/1024 [MB] (33 MBps) Copying: 334/1024 [MB] (33 MBps) Copying: 368/1024 [MB] (34 MBps) Copying: 404/1024 [MB] (35 MBps) Copying: 438/1024 [MB] (34 MBps) Copying: 473/1024 [MB] (34 MBps) Copying: 507/1024 [MB] (34 MBps) Copying: 540/1024 [MB] (33 MBps) Copying: 573/1024 [MB] (33 MBps) Copying: 606/1024 [MB] (32 MBps) Copying: 639/1024 [MB] (32 MBps) Copying: 671/1024 [MB] (32 MBps) Copying: 705/1024 [MB] (33 MBps) Copying: 738/1024 [MB] (33 MBps) Copying: 772/1024 [MB] (33 MBps) Copying: 805/1024 [MB] (33 MBps) Copying: 839/1024 [MB] (33 MBps) Copying: 870/1024 [MB] (31 MBps) Copying: 900/1024 [MB] (29 MBps) Copying: 931/1024 [MB] (31 MBps) Copying: 969/1024 [MB] (37 MBps) Copying: 1005/1024 [MB] (36 MBps) Copying: 1024/1024 [MB] (average 33 MBps)[2024-07-15 19:50:26.073668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.336 [2024-07-15 19:50:26.073991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:35.336 [2024-07-15 19:50:26.074131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:35.336 [2024-07-15 19:50:26.074187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.336 [2024-07-15 19:50:26.074271] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:35.336 [2024-07-15 19:50:26.080181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.336 [2024-07-15 19:50:26.080334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Unregister IO device 00:28:35.336 [2024-07-15 19:50:26.080421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.693 ms 00:28:35.336 [2024-07-15 19:50:26.080458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.336 [2024-07-15 19:50:26.080691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.336 [2024-07-15 19:50:26.080741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:35.336 [2024-07-15 19:50:26.080803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:28:35.336 [2024-07-15 19:50:26.080837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.336 [2024-07-15 19:50:26.083924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.336 [2024-07-15 19:50:26.084046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:35.336 [2024-07-15 19:50:26.084124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.050 ms 00:28:35.336 [2024-07-15 19:50:26.084159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.336 [2024-07-15 19:50:26.089342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.336 [2024-07-15 19:50:26.089378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:35.336 [2024-07-15 19:50:26.089397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.137 ms 00:28:35.336 [2024-07-15 19:50:26.089408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.595 [2024-07-15 19:50:26.130713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.595 [2024-07-15 19:50:26.130788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:35.595 [2024-07-15 19:50:26.130808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.229 ms 00:28:35.595 [2024-07-15 19:50:26.130818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.595 [2024-07-15 19:50:26.153864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.595 [2024-07-15 19:50:26.153900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:35.595 [2024-07-15 19:50:26.153915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.990 ms 00:28:35.595 [2024-07-15 19:50:26.153926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.595 [2024-07-15 19:50:26.157890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.595 [2024-07-15 19:50:26.157929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:35.595 [2024-07-15 19:50:26.157943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.919 ms 00:28:35.595 [2024-07-15 19:50:26.157960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.595 [2024-07-15 19:50:26.198185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.595 [2024-07-15 19:50:26.198222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:28:35.595 [2024-07-15 19:50:26.198236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.206 ms 00:28:35.595 [2024-07-15 19:50:26.198246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.595 [2024-07-15 19:50:26.236541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.595 [2024-07-15 
19:50:26.236578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:28:35.595 [2024-07-15 19:50:26.236592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.255 ms 00:28:35.595 [2024-07-15 19:50:26.236618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.595 [2024-07-15 19:50:26.275894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.595 [2024-07-15 19:50:26.275948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:35.595 [2024-07-15 19:50:26.275973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.237 ms 00:28:35.595 [2024-07-15 19:50:26.275983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.595 [2024-07-15 19:50:26.315940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.595 [2024-07-15 19:50:26.315996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:35.595 [2024-07-15 19:50:26.316013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.875 ms 00:28:35.595 [2024-07-15 19:50:26.316024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.595 [2024-07-15 19:50:26.316072] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:35.595 [2024-07-15 19:50:26.316090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:35.595 [2024-07-15 19:50:26.316103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:28:35.595 [2024-07-15 19:50:26.316115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316267] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 
19:50:26.316537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 
00:28:35.595 [2024-07-15 19:50:26.316820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.316997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.317008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.317019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.317030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.317041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.317059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.317069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.317079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.317090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.317100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 
wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.317111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.317122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.317132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.317143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.317154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.317165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.317176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.317187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.317198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.317209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:35.595 [2024-07-15 19:50:26.317227] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:35.595 [2024-07-15 19:50:26.317237] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8cd2928b-48a6-4d1e-8e3e-dde3bed8b2dd 00:28:35.595 [2024-07-15 19:50:26.317248] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:28:35.595 [2024-07-15 19:50:26.317258] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:35.595 [2024-07-15 19:50:26.317274] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:35.595 [2024-07-15 19:50:26.317284] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:35.595 [2024-07-15 19:50:26.317293] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:35.595 [2024-07-15 19:50:26.317304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:35.595 [2024-07-15 19:50:26.317313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:35.595 [2024-07-15 19:50:26.317322] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:35.595 [2024-07-15 19:50:26.317332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:35.595 [2024-07-15 19:50:26.317341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.595 [2024-07-15 19:50:26.317351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:35.595 [2024-07-15 19:50:26.317362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.271 ms 00:28:35.595 [2024-07-15 19:50:26.317372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.595 [2024-07-15 19:50:26.338278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.595 [2024-07-15 19:50:26.338316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:35.595 [2024-07-15 19:50:26.338340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.859 ms 00:28:35.595 [2024-07-15 19:50:26.338350] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.595 [2024-07-15 19:50:26.338848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.595 [2024-07-15 19:50:26.338861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:35.595 [2024-07-15 19:50:26.338871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:28:35.595 [2024-07-15 19:50:26.338882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.595 [2024-07-15 19:50:26.384090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.595 [2024-07-15 19:50:26.384148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:35.595 [2024-07-15 19:50:26.384163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.595 [2024-07-15 19:50:26.384173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.595 [2024-07-15 19:50:26.384245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.595 [2024-07-15 19:50:26.384257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:35.595 [2024-07-15 19:50:26.384267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.595 [2024-07-15 19:50:26.384277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.595 [2024-07-15 19:50:26.384362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.595 [2024-07-15 19:50:26.384375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:35.595 [2024-07-15 19:50:26.384395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.595 [2024-07-15 19:50:26.384405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.595 [2024-07-15 19:50:26.384421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.595 [2024-07-15 19:50:26.384432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:35.595 [2024-07-15 19:50:26.384442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.596 [2024-07-15 19:50:26.384452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.853 [2024-07-15 19:50:26.509343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.853 [2024-07-15 19:50:26.509404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:35.853 [2024-07-15 19:50:26.509419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.853 [2024-07-15 19:50:26.509430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.853 [2024-07-15 19:50:26.615311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.853 [2024-07-15 19:50:26.615372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:35.853 [2024-07-15 19:50:26.615388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.853 [2024-07-15 19:50:26.615398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.853 [2024-07-15 19:50:26.615482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.853 [2024-07-15 19:50:26.615501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:35.854 [2024-07-15 19:50:26.615512] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.854 [2024-07-15 19:50:26.615523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.854 [2024-07-15 19:50:26.615563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.854 [2024-07-15 19:50:26.615575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:35.854 [2024-07-15 19:50:26.615586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.854 [2024-07-15 19:50:26.615597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.854 [2024-07-15 19:50:26.615707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.854 [2024-07-15 19:50:26.615725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:35.854 [2024-07-15 19:50:26.615735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.854 [2024-07-15 19:50:26.615745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.854 [2024-07-15 19:50:26.615781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.854 [2024-07-15 19:50:26.615793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:35.854 [2024-07-15 19:50:26.615803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.854 [2024-07-15 19:50:26.615835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.854 [2024-07-15 19:50:26.615872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.854 [2024-07-15 19:50:26.615884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:35.854 [2024-07-15 19:50:26.615898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.854 [2024-07-15 19:50:26.615908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.854 [2024-07-15 19:50:26.615950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.854 [2024-07-15 19:50:26.615961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:35.854 [2024-07-15 19:50:26.615972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.854 [2024-07-15 19:50:26.615982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.854 [2024-07-15 19:50:26.616104] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 542.414 ms, result 0 00:28:37.225 00:28:37.225 00:28:37.225 19:50:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:39.139 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:28:39.139 19:50:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:28:39.139 19:50:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:28:39.139 19:50:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:39.139 19:50:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:39.139 19:50:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:39.396 19:50:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 
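[editor's note] The two md5sum checks in the trace above (dirty_shutdown.sh@94 and @96) read two 262144-block ranges back through spdk_dd — the second offset by --skip=262144 — and compare them against digests recorded when the data was originally written, confirming nothing was lost across the unclean FTL stop. A minimal sketch of that read-back verification follows; it uses only spdk_dd flags visible in this trace, the first read is the assumed counterpart of the second, and the paths, block counts and the ftl0 name are taken from this run purely for illustration:

    SPDK=/home/vagrant/spdk_repo/spdk
    DD=$SPDK/build/bin/spdk_dd
    CFG=$SPDK/test/ftl/config/ftl.json    # saved bdev/FTL config reused after the dirty shutdown

    # read the first 262144-block range back from the FTL bdev and verify its digest
    $DD --ib=ftl0 --of=$SPDK/test/ftl/testfile --count=262144 --json=$CFG
    md5sum -c $SPDK/test/ftl/testfile.md5

    # read the next range (offset by --skip=262144) and verify it as well
    $DD --ib=ftl0 --of=$SPDK/test/ftl/testfile2 --count=262144 --skip=262144 --json=$CFG
    md5sum -c $SPDK/test/ftl/testfile2.md5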
00:28:39.396 19:50:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:39.396 Process with pid 83904 is not found 00:28:39.396 19:50:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 83904 00:28:39.396 19:50:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@948 -- # '[' -z 83904 ']' 00:28:39.396 19:50:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # kill -0 83904 00:28:39.397 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (83904) - No such process 00:28:39.397 19:50:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@975 -- # echo 'Process with pid 83904 is not found' 00:28:39.397 19:50:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:28:39.654 Remove shared memory files 00:28:39.654 19:50:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:28:39.654 19:50:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:39.654 19:50:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:39.654 19:50:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:39.654 19:50:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:28:39.654 19:50:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:39.654 19:50:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:39.654 00:28:39.654 real 3m12.674s 00:28:39.654 user 3m40.465s 00:28:39.654 sys 0m35.240s 00:28:39.654 19:50:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:39.654 19:50:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:39.654 ************************************ 00:28:39.654 END TEST ftl_dirty_shutdown 00:28:39.654 ************************************ 00:28:39.654 19:50:30 ftl -- common/autotest_common.sh@1142 -- # return 0 00:28:39.654 19:50:30 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:39.654 19:50:30 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:28:39.654 19:50:30 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:39.654 19:50:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:39.654 ************************************ 00:28:39.654 START TEST ftl_upgrade_shutdown 00:28:39.654 ************************************ 00:28:39.654 19:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:39.654 * Looking for test storage... 00:28:39.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:39.654 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:39.654 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:28:39.654 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:28:39.912 
19:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85954 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85954 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 85954 ']' 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.912 19:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:39.913 19:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.913 19:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:39.913 19:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:39.913 [2024-07-15 19:50:30.596000] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
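
The trace above ends with ftl/common.sh launching a dedicated SPDK target for the test and waiting for its RPC socket. Condensed into a minimal sketch (paths and the [0] core mask are the values from this run; waitforlisten is the autotest_common.sh helper that polls the default /var/tmp/spdk.sock until the new process answers):

    # Bring up the SPDK target that will host the FTL bdev under test.
    spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt_bin" --cpumask='[0]' &      # single reactor pinned to core 0
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"          # returns once the RPC socket is listening
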
00:28:39.913 [2024-07-15 19:50:30.596412] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85954 ] 00:28:40.173 [2024-07-15 19:50:30.784839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.439 [2024-07-15 19:50:31.087394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.371 19:50:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:41.371 19:50:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:28:41.371 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:41.371 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:28:41.371 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:28:41.371 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:41.371 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:28:41.371 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:41.371 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:28:41.371 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:41.371 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:28:41.371 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:41.371 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:28:41.371 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:41.371 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:28:41.371 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:41.371 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:28:41.371 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:28:41.371 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:28:41.371 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:41.371 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:28:41.371 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:41.371 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:28:41.629 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:28:41.629 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:41.629 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:28:41.629 19:50:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:28:41.629 19:50:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:41.629 19:50:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:28:41.629 19:50:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:28:41.629 19:50:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:28:41.886 19:50:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:41.886 { 00:28:41.886 "name": "basen1", 00:28:41.886 "aliases": [ 00:28:41.886 "ef47f078-a240-46eb-a251-995cb81194a9" 00:28:41.886 ], 00:28:41.886 "product_name": "NVMe disk", 00:28:41.886 "block_size": 4096, 00:28:41.886 "num_blocks": 1310720, 00:28:41.886 "uuid": "ef47f078-a240-46eb-a251-995cb81194a9", 00:28:41.886 "assigned_rate_limits": { 00:28:41.886 "rw_ios_per_sec": 0, 00:28:41.886 "rw_mbytes_per_sec": 0, 00:28:41.886 "r_mbytes_per_sec": 0, 00:28:41.886 "w_mbytes_per_sec": 0 00:28:41.886 }, 00:28:41.886 "claimed": true, 00:28:41.886 "claim_type": "read_many_write_one", 00:28:41.886 "zoned": false, 00:28:41.886 "supported_io_types": { 00:28:41.886 "read": true, 00:28:41.886 "write": true, 00:28:41.886 "unmap": true, 00:28:41.887 "flush": true, 00:28:41.887 "reset": true, 00:28:41.887 "nvme_admin": true, 00:28:41.887 "nvme_io": true, 00:28:41.887 "nvme_io_md": false, 00:28:41.887 "write_zeroes": true, 00:28:41.887 "zcopy": false, 00:28:41.887 "get_zone_info": false, 00:28:41.887 "zone_management": false, 00:28:41.887 "zone_append": false, 00:28:41.887 "compare": true, 00:28:41.887 "compare_and_write": false, 00:28:41.887 "abort": true, 00:28:41.887 "seek_hole": false, 00:28:41.887 "seek_data": false, 00:28:41.887 "copy": true, 00:28:41.887 "nvme_iov_md": false 00:28:41.887 }, 00:28:41.887 "driver_specific": { 00:28:41.887 "nvme": [ 00:28:41.887 { 00:28:41.887 "pci_address": "0000:00:11.0", 00:28:41.887 "trid": { 00:28:41.887 "trtype": "PCIe", 00:28:41.887 "traddr": "0000:00:11.0" 00:28:41.887 }, 00:28:41.887 "ctrlr_data": { 00:28:41.887 "cntlid": 0, 00:28:41.887 "vendor_id": "0x1b36", 00:28:41.887 "model_number": "QEMU NVMe Ctrl", 00:28:41.887 "serial_number": "12341", 00:28:41.887 "firmware_revision": "8.0.0", 00:28:41.887 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:41.887 "oacs": { 00:28:41.887 "security": 0, 00:28:41.887 "format": 1, 00:28:41.887 "firmware": 0, 00:28:41.887 "ns_manage": 1 00:28:41.887 }, 00:28:41.887 "multi_ctrlr": false, 00:28:41.887 "ana_reporting": false 00:28:41.887 }, 00:28:41.887 "vs": { 00:28:41.887 "nvme_version": "1.4" 00:28:41.887 }, 00:28:41.887 "ns_data": { 00:28:41.887 "id": 1, 00:28:41.887 "can_share": false 00:28:41.887 } 00:28:41.887 } 00:28:41.887 ], 00:28:41.887 "mp_policy": "active_passive" 00:28:41.887 } 00:28:41.887 } 00:28:41.887 ]' 00:28:41.887 19:50:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:41.887 19:50:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:28:41.887 19:50:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:41.887 19:50:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:41.887 19:50:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:41.887 19:50:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:28:41.887 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:41.887 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:28:41.887 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:41.887 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:41.887 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:42.145 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=ed23df3e-48b2-4522-bc4b-e562f768ad13 00:28:42.145 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:42.145 19:50:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ed23df3e-48b2-4522-bc4b-e562f768ad13 00:28:42.403 19:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:28:42.661 19:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=16421c72-177f-4b66-a93e-559d5549e83b 00:28:42.661 19:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 16421c72-177f-4b66-a93e-559d5549e83b 00:28:42.919 19:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=7c628ece-b63f-4cdf-a743-ebdebbc722e9 00:28:42.919 19:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 7c628ece-b63f-4cdf-a743-ebdebbc722e9 ]] 00:28:42.919 19:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 7c628ece-b63f-4cdf-a743-ebdebbc722e9 5120 00:28:42.919 19:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:28:42.919 19:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:42.919 19:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=7c628ece-b63f-4cdf-a743-ebdebbc722e9 00:28:42.919 19:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:28:42.919 19:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 7c628ece-b63f-4cdf-a743-ebdebbc722e9 00:28:42.919 19:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=7c628ece-b63f-4cdf-a743-ebdebbc722e9 00:28:42.919 19:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:42.919 19:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:28:42.919 19:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:28:42.919 19:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7c628ece-b63f-4cdf-a743-ebdebbc722e9 00:28:42.919 19:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:42.919 { 00:28:42.919 "name": "7c628ece-b63f-4cdf-a743-ebdebbc722e9", 00:28:42.919 "aliases": [ 00:28:42.919 "lvs/basen1p0" 00:28:42.919 ], 00:28:42.919 "product_name": "Logical Volume", 00:28:42.919 "block_size": 4096, 00:28:42.919 "num_blocks": 5242880, 00:28:42.919 "uuid": "7c628ece-b63f-4cdf-a743-ebdebbc722e9", 00:28:42.919 "assigned_rate_limits": { 00:28:42.919 "rw_ios_per_sec": 0, 00:28:42.919 "rw_mbytes_per_sec": 0, 00:28:42.919 "r_mbytes_per_sec": 0, 00:28:42.919 "w_mbytes_per_sec": 0 00:28:42.919 }, 00:28:42.919 "claimed": false, 00:28:42.919 "zoned": false, 00:28:42.919 "supported_io_types": { 00:28:42.919 "read": true, 00:28:42.919 "write": true, 00:28:42.919 "unmap": true, 00:28:42.919 "flush": false, 00:28:42.919 "reset": true, 00:28:42.920 "nvme_admin": false, 00:28:42.920 "nvme_io": false, 00:28:42.920 "nvme_io_md": false, 00:28:42.920 "write_zeroes": true, 00:28:42.920 
"zcopy": false, 00:28:42.920 "get_zone_info": false, 00:28:42.920 "zone_management": false, 00:28:42.920 "zone_append": false, 00:28:42.920 "compare": false, 00:28:42.920 "compare_and_write": false, 00:28:42.920 "abort": false, 00:28:42.920 "seek_hole": true, 00:28:42.920 "seek_data": true, 00:28:42.920 "copy": false, 00:28:42.920 "nvme_iov_md": false 00:28:42.920 }, 00:28:42.920 "driver_specific": { 00:28:42.920 "lvol": { 00:28:42.920 "lvol_store_uuid": "16421c72-177f-4b66-a93e-559d5549e83b", 00:28:42.920 "base_bdev": "basen1", 00:28:42.920 "thin_provision": true, 00:28:42.920 "num_allocated_clusters": 0, 00:28:42.920 "snapshot": false, 00:28:42.920 "clone": false, 00:28:42.920 "esnap_clone": false 00:28:42.920 } 00:28:42.920 } 00:28:42.920 } 00:28:42.920 ]' 00:28:42.920 19:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:42.920 19:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:28:42.920 19:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:43.177 19:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:28:43.177 19:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:28:43.177 19:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:28:43.177 19:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:28:43.177 19:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:43.177 19:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:28:43.435 19:50:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:28:43.435 19:50:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:28:43.435 19:50:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:28:43.693 19:50:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:28:43.693 19:50:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:28:43.693 19:50:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 7c628ece-b63f-4cdf-a743-ebdebbc722e9 -c cachen1p0 --l2p_dram_limit 2 00:28:43.952 [2024-07-15 19:50:34.549964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.952 [2024-07-15 19:50:34.550024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:43.952 [2024-07-15 19:50:34.550056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:43.952 [2024-07-15 19:50:34.550070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.952 [2024-07-15 19:50:34.550141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.952 [2024-07-15 19:50:34.550156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:43.952 [2024-07-15 19:50:34.550167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:28:43.952 [2024-07-15 19:50:34.550180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.952 [2024-07-15 19:50:34.550201] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:43.952 [2024-07-15 19:50:34.551416] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:43.952 [2024-07-15 19:50:34.551440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.952 [2024-07-15 19:50:34.551457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:43.952 [2024-07-15 19:50:34.551468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.243 ms 00:28:43.952 [2024-07-15 19:50:34.551481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.952 [2024-07-15 19:50:34.551605] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID ccc74b4e-8a2a-4a39-901a-daebb00cb136 00:28:43.952 [2024-07-15 19:50:34.553056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.952 [2024-07-15 19:50:34.553092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:28:43.952 [2024-07-15 19:50:34.553110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:43.952 [2024-07-15 19:50:34.553120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.952 [2024-07-15 19:50:34.560645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.952 [2024-07-15 19:50:34.560678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:43.952 [2024-07-15 19:50:34.560697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.479 ms 00:28:43.952 [2024-07-15 19:50:34.560707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.952 [2024-07-15 19:50:34.560757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.952 [2024-07-15 19:50:34.560771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:43.952 [2024-07-15 19:50:34.560801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:28:43.952 [2024-07-15 19:50:34.560812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.952 [2024-07-15 19:50:34.560893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.952 [2024-07-15 19:50:34.560906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:43.952 [2024-07-15 19:50:34.560919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:43.952 [2024-07-15 19:50:34.560932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.952 [2024-07-15 19:50:34.560961] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:43.952 [2024-07-15 19:50:34.566668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.952 [2024-07-15 19:50:34.566711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:43.952 [2024-07-15 19:50:34.566724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.717 ms 00:28:43.952 [2024-07-15 19:50:34.566736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.952 [2024-07-15 19:50:34.566768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.952 [2024-07-15 19:50:34.566793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:43.952 [2024-07-15 19:50:34.566805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:43.952 [2024-07-15 19:50:34.566818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
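
The FTL startup trace running through these lines was kicked off by the RPC sequence traced just before it. Condensed into a sketch (bdev names, PCI addresses, sizes and UUIDs are the values observed in this run and will differ elsewhere):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base device: local NVMe at 0000:00:11.0, exposed as basen1 (1310720 blocks * 4096 B = 5120 MiB).
    $rpc_py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
    # Remove any leftover lvolstore, then carve a thin-provisioned 20480 MiB volume.
    $rpc_py bdev_lvol_delete_lvstore -u ed23df3e-48b2-4522-bc4b-e562f768ad13
    $rpc_py bdev_lvol_create_lvstore basen1 lvs
    $rpc_py bdev_lvol_create basen1p0 20480 -t -u 16421c72-177f-4b66-a93e-559d5549e83b
    # NV cache: local NVMe at 0000:00:10.0, split so the first 5120 MiB becomes cachen1p0.
    $rpc_py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    $rpc_py bdev_split_create cachen1 -s 5120 1
    # Create the FTL bdev on top of the thin lvol, with cachen1p0 as the write buffer.
    $rpc_py -t 60 bdev_ftl_create -b ftl -d 7c628ece-b63f-4cdf-a743-ebdebbc722e9 \
        -c cachen1p0 --l2p_dram_limit 2

The earlier [[ 20480 -le 5120 ]] check compares the requested base size with the physical namespace size; since the request is larger, the script continues with a thin-provisioned lvol big enough for the full 20480 MiB.
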
00:28:43.952 [2024-07-15 19:50:34.566854] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:28:43.952 [2024-07-15 19:50:34.566997] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:43.952 [2024-07-15 19:50:34.567011] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:43.952 [2024-07-15 19:50:34.567030] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:28:43.952 [2024-07-15 19:50:34.567043] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:43.952 [2024-07-15 19:50:34.567058] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:43.952 [2024-07-15 19:50:34.567069] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:43.952 [2024-07-15 19:50:34.567081] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:43.952 [2024-07-15 19:50:34.567095] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:43.952 [2024-07-15 19:50:34.567108] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:43.952 [2024-07-15 19:50:34.567118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.952 [2024-07-15 19:50:34.567131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:43.952 [2024-07-15 19:50:34.567141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.265 ms 00:28:43.952 [2024-07-15 19:50:34.567154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.952 [2024-07-15 19:50:34.567226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.952 [2024-07-15 19:50:34.567239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:43.952 [2024-07-15 19:50:34.567249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:28:43.952 [2024-07-15 19:50:34.567261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.952 [2024-07-15 19:50:34.567351] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:43.952 [2024-07-15 19:50:34.567368] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:43.952 [2024-07-15 19:50:34.567379] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:43.952 [2024-07-15 19:50:34.567392] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.952 [2024-07-15 19:50:34.567402] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:43.952 [2024-07-15 19:50:34.567413] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:43.952 [2024-07-15 19:50:34.567435] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:43.952 [2024-07-15 19:50:34.567447] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:43.952 [2024-07-15 19:50:34.567457] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:43.952 [2024-07-15 19:50:34.567469] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.952 [2024-07-15 19:50:34.567478] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:43.952 [2024-07-15 19:50:34.567493] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 
14.75 MiB 00:28:43.952 [2024-07-15 19:50:34.567502] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.952 [2024-07-15 19:50:34.567514] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:43.952 [2024-07-15 19:50:34.567523] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:43.952 [2024-07-15 19:50:34.567534] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.952 [2024-07-15 19:50:34.567544] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:43.952 [2024-07-15 19:50:34.567558] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:43.952 [2024-07-15 19:50:34.567567] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.952 [2024-07-15 19:50:34.567579] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:43.952 [2024-07-15 19:50:34.567589] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:43.952 [2024-07-15 19:50:34.567600] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:43.952 [2024-07-15 19:50:34.567612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:43.952 [2024-07-15 19:50:34.567624] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:43.952 [2024-07-15 19:50:34.567633] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:43.953 [2024-07-15 19:50:34.567645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:43.953 [2024-07-15 19:50:34.567655] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:43.953 [2024-07-15 19:50:34.567666] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:43.953 [2024-07-15 19:50:34.567675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:43.953 [2024-07-15 19:50:34.567687] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:43.953 [2024-07-15 19:50:34.567697] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:43.953 [2024-07-15 19:50:34.567709] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:43.953 [2024-07-15 19:50:34.567718] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:43.953 [2024-07-15 19:50:34.567732] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.953 [2024-07-15 19:50:34.567741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:43.953 [2024-07-15 19:50:34.567752] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:43.953 [2024-07-15 19:50:34.567761] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.953 [2024-07-15 19:50:34.567774] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:43.953 [2024-07-15 19:50:34.568004] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:43.953 [2024-07-15 19:50:34.568044] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.953 [2024-07-15 19:50:34.568075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:43.953 [2024-07-15 19:50:34.568107] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:43.953 [2024-07-15 19:50:34.568136] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.953 [2024-07-15 19:50:34.568167] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:28:43.953 [2024-07-15 19:50:34.568197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:43.953 [2024-07-15 19:50:34.568230] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:43.953 [2024-07-15 19:50:34.568330] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.953 [2024-07-15 19:50:34.568372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:43.953 [2024-07-15 19:50:34.568403] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:43.953 [2024-07-15 19:50:34.568438] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:43.953 [2024-07-15 19:50:34.568468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:43.953 [2024-07-15 19:50:34.568499] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:43.953 [2024-07-15 19:50:34.568529] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:43.953 [2024-07-15 19:50:34.568566] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:43.953 [2024-07-15 19:50:34.568720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:43.953 [2024-07-15 19:50:34.568791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:43.953 [2024-07-15 19:50:34.568843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:43.953 [2024-07-15 19:50:34.568893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:43.953 [2024-07-15 19:50:34.568998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:43.953 [2024-07-15 19:50:34.569173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:43.953 [2024-07-15 19:50:34.569223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:43.953 [2024-07-15 19:50:34.569275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:43.953 [2024-07-15 19:50:34.569323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:43.953 [2024-07-15 19:50:34.569373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:43.953 [2024-07-15 19:50:34.569482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:43.953 [2024-07-15 19:50:34.569499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:43.953 [2024-07-15 19:50:34.569509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:43.953 [2024-07-15 19:50:34.569522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 
blk_offs:0x2f80 blk_sz:0x20 00:28:43.953 [2024-07-15 19:50:34.569533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:43.953 [2024-07-15 19:50:34.569546] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:43.953 [2024-07-15 19:50:34.569558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:43.953 [2024-07-15 19:50:34.569572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:43.953 [2024-07-15 19:50:34.569583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:43.953 [2024-07-15 19:50:34.569596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:43.953 [2024-07-15 19:50:34.569607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:43.953 [2024-07-15 19:50:34.569622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.953 [2024-07-15 19:50:34.569633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:43.953 [2024-07-15 19:50:34.569646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.320 ms 00:28:43.953 [2024-07-15 19:50:34.569656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.953 [2024-07-15 19:50:34.569713] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
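
In the superblock metadata dump above, each region is given as blk_offs/blk_sz in 4096-byte FTL blocks, while the layout dump before it reports the same regions in MiB; the two are consistent. A quick arithmetic check from any shell:

    # L2P region, type 0x2: 0xe80 blocks of 4 KiB
    python3 -c 'print(0xe80 * 4096 / 2**20)'       # -> 14.5   (matches "Region l2p ... 14.50 MiB")
    # User data region on the base device, type 0x9
    python3 -c 'print(0x480000 * 4096 / 2**20)'    # -> 18432.0 (matches "Region data_btm ... 18432.00 MiB")
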
00:28:43.953 [2024-07-15 19:50:34.569726] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:46.485 [2024-07-15 19:50:37.225627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.485 [2024-07-15 19:50:37.225695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:46.485 [2024-07-15 19:50:37.225716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2655.893 ms 00:28:46.485 [2024-07-15 19:50:37.225727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.485 [2024-07-15 19:50:37.269181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.485 [2024-07-15 19:50:37.269227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:46.485 [2024-07-15 19:50:37.269246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.139 ms 00:28:46.485 [2024-07-15 19:50:37.269265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.485 [2024-07-15 19:50:37.269379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.485 [2024-07-15 19:50:37.269392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:46.485 [2024-07-15 19:50:37.269405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:46.485 [2024-07-15 19:50:37.269419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.744 [2024-07-15 19:50:37.323798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.744 [2024-07-15 19:50:37.323842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:46.744 [2024-07-15 19:50:37.323860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 54.330 ms 00:28:46.744 [2024-07-15 19:50:37.323870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.744 [2024-07-15 19:50:37.323921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.744 [2024-07-15 19:50:37.323935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:46.744 [2024-07-15 19:50:37.323948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:46.744 [2024-07-15 19:50:37.323958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.744 [2024-07-15 19:50:37.324441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.744 [2024-07-15 19:50:37.324454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:46.744 [2024-07-15 19:50:37.324467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.409 ms 00:28:46.744 [2024-07-15 19:50:37.324477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.744 [2024-07-15 19:50:37.324526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.744 [2024-07-15 19:50:37.324539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:46.744 [2024-07-15 19:50:37.324555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:28:46.744 [2024-07-15 19:50:37.324565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.744 [2024-07-15 19:50:37.347926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.744 [2024-07-15 19:50:37.347973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:46.744 [2024-07-15 19:50:37.347991] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.334 ms 00:28:46.744 [2024-07-15 19:50:37.348002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.744 [2024-07-15 19:50:37.362383] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:46.744 [2024-07-15 19:50:37.363480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.744 [2024-07-15 19:50:37.363513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:46.744 [2024-07-15 19:50:37.363528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.370 ms 00:28:46.744 [2024-07-15 19:50:37.363541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.744 [2024-07-15 19:50:37.402154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.744 [2024-07-15 19:50:37.402214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:28:46.744 [2024-07-15 19:50:37.402230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.570 ms 00:28:46.744 [2024-07-15 19:50:37.402243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.744 [2024-07-15 19:50:37.402341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.744 [2024-07-15 19:50:37.402360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:46.744 [2024-07-15 19:50:37.402372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:28:46.744 [2024-07-15 19:50:37.402388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.744 [2024-07-15 19:50:37.440750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.744 [2024-07-15 19:50:37.440804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:28:46.744 [2024-07-15 19:50:37.440820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.309 ms 00:28:46.744 [2024-07-15 19:50:37.440833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.744 [2024-07-15 19:50:37.480811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.744 [2024-07-15 19:50:37.480852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:28:46.744 [2024-07-15 19:50:37.480865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.932 ms 00:28:46.744 [2024-07-15 19:50:37.480878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.744 [2024-07-15 19:50:37.481639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.744 [2024-07-15 19:50:37.481663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:46.744 [2024-07-15 19:50:37.481674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.717 ms 00:28:46.744 [2024-07-15 19:50:37.481691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.002 [2024-07-15 19:50:37.592247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.002 [2024-07-15 19:50:37.592318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:28:47.002 [2024-07-15 19:50:37.592336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 110.500 ms 00:28:47.002 [2024-07-15 19:50:37.592353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.002 [2024-07-15 19:50:37.633001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:47.002 [2024-07-15 19:50:37.633069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:28:47.002 [2024-07-15 19:50:37.633086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.599 ms 00:28:47.002 [2024-07-15 19:50:37.633099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.002 [2024-07-15 19:50:37.673791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.002 [2024-07-15 19:50:37.673858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:28:47.002 [2024-07-15 19:50:37.673885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.642 ms 00:28:47.002 [2024-07-15 19:50:37.673899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.002 [2024-07-15 19:50:37.715346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.002 [2024-07-15 19:50:37.715424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:47.002 [2024-07-15 19:50:37.715441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.395 ms 00:28:47.002 [2024-07-15 19:50:37.715454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.002 [2024-07-15 19:50:37.715512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.002 [2024-07-15 19:50:37.715527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:47.002 [2024-07-15 19:50:37.715538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:47.002 [2024-07-15 19:50:37.715554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.002 [2024-07-15 19:50:37.715651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.002 [2024-07-15 19:50:37.715667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:47.002 [2024-07-15 19:50:37.715682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:28:47.002 [2024-07-15 19:50:37.715694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.002 [2024-07-15 19:50:37.716766] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3166.312 ms, result 0 00:28:47.002 { 00:28:47.002 "name": "ftl", 00:28:47.002 "uuid": "ccc74b4e-8a2a-4a39-901a-daebb00cb136" 00:28:47.002 } 00:28:47.002 19:50:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:28:47.260 [2024-07-15 19:50:38.004037] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:47.260 19:50:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:28:47.519 19:50:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:28:47.779 [2024-07-15 19:50:38.368370] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:47.779 19:50:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:28:47.779 [2024-07-15 19:50:38.559294] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:48.037 19:50:38 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:28:48.295 Fill FTL, iteration 1 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=86076 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 86076 /var/tmp/spdk.tgt.sock 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86076 ']' 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:28:48.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:48.295 19:50:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:48.295 [2024-07-15 19:50:38.990370] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
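
Earlier in this chunk the new FTL bdev is exported over NVMe-oF/TCP, and a second short-lived SPDK target (the initiator whose startup begins above, pinned to core 1 and answering on /var/tmp/spdk.tgt.sock) is launched to consume it. The export side, condensed from the traced commands (the save_config output is kept by the script, presumably as the tgt.json referenced at the top of this trace, so the target can be restored after the shutdown step; the redirection itself is not visible here):

    # Export bdev "ftl" over NVMe-oF/TCP on the loopback address.
    $rpc_py nvmf_create_transport --trtype TCP
    $rpc_py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    $rpc_py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    $rpc_py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    $rpc_py save_config        # snapshot of the target configuration
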
00:28:48.295 [2024-07-15 19:50:38.990498] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86076 ] 00:28:48.553 [2024-07-15 19:50:39.152237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.811 [2024-07-15 19:50:39.387667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.747 19:50:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:49.747 19:50:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:28:49.747 19:50:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:28:50.006 ftln1 00:28:50.006 19:50:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:28:50.006 19:50:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:28:50.264 19:50:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:28:50.264 19:50:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 86076 00:28:50.264 19:50:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 86076 ']' 00:28:50.264 19:50:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 86076 00:28:50.264 19:50:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:28:50.264 19:50:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:50.264 19:50:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86076 00:28:50.264 killing process with pid 86076 00:28:50.264 19:50:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:50.264 19:50:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:50.264 19:50:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86076' 00:28:50.264 19:50:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 86076 00:28:50.264 19:50:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 86076 00:28:52.797 19:50:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:28:52.797 19:50:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:52.797 [2024-07-15 19:50:43.576207] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
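
Before any data is moved, the initiator target is used once to build a JSON configuration that later spdk_dd runs can replay, and is then shut down. A condensed sketch of what the trace above shows (the trace does not show where the combined JSON is written, but the --json=.../config/ini.json argument on every spdk_dd invocation below implies that is the destination; killprocess is the autotest helper):

    ini_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"

    # Connect to the exported subsystem; the namespace shows up locally as ftln1.
    $ini_rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2018-09.io.spdk:cnode0

    # Wrap the bdev subsystem config so spdk_dd can load it stand-alone.
    {
        echo '{"subsystems": ['
        $ini_rpc save_subsystem_config -n bdev
        echo ']}'
    }                                  # stored by the script as test/ftl/config/ini.json

    killprocess "$spdk_ini_pid"        # the helper target is no longer needed
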
00:28:52.797 [2024-07-15 19:50:43.576393] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86130 ] 00:28:53.054 [2024-07-15 19:50:43.760744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.311 [2024-07-15 19:50:44.002547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.464  Copying: 240/1024 [MB] (240 MBps) Copying: 491/1024 [MB] (251 MBps) Copying: 732/1024 [MB] (241 MBps) Copying: 975/1024 [MB] (243 MBps) Copying: 1024/1024 [MB] (average 242 MBps) 00:28:59.464 00:28:59.464 Calculate MD5 checksum, iteration 1 00:28:59.464 19:50:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:59.464 19:50:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:59.464 19:50:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:59.464 19:50:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:59.464 19:50:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:59.464 19:50:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:59.464 19:50:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:59.464 19:50:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:59.722 [2024-07-15 19:50:50.265334] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
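
Every subsequent data-moving step goes through the tcp_dd helper traced above: each call is a fresh spdk_dd process on core 1 that loads the saved ini.json, reconnects to the target over TCP, and then behaves like dd against the ftln1 bdev. Roughly, under the assumption that the traced ftl/common.sh@198-199 lines are all the helper does (the real helper also re-creates ini.json when it is missing):

    tcp_dd() {
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --cpumask='[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json "$@"
    }

    # First fill pass, as above: 1024 blocks of 1 MiB from /dev/urandom into ftln1.
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
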
00:28:59.722 [2024-07-15 19:50:50.265460] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86201 ] 00:28:59.722 [2024-07-15 19:50:50.431705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.980 [2024-07-15 19:50:50.678447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.385  Copying: 616/1024 [MB] (616 MBps) Copying: 1024/1024 [MB] (average 588 MBps) 00:29:03.385 00:29:03.385 19:50:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:29:03.385 19:50:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:05.287 19:50:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:05.287 Fill FTL, iteration 2 00:29:05.287 19:50:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b9362679809ccd032b7577603bf3c4f0 00:29:05.287 19:50:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:05.287 19:50:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:05.287 19:50:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:05.287 19:50:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:05.287 19:50:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:05.287 19:50:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:05.287 19:50:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:05.287 19:50:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:05.287 19:50:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:05.287 [2024-07-15 19:50:56.011952] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
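
The seek/skip/sums bookkeeping visible above is the core of the test's data phase: two iterations, each writing one fresh GiB of random data through the tcp_dd wrapper sketched earlier and immediately reading it back for an MD5 reference. Reconstructed from the traced variables (the actual upgrade_shutdown.sh may differ in detail):

    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/file
    bs=1048576; count=1024; qd=2; iterations=2
    seek=0; skip=0; sums=()

    for (( i = 0; i < iterations; i++ )); do
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$(( seek + count ))                       # next pass writes the following GiB
        tcp_dd --ib=ftln1 --of=$testfile --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$(( skip + count ))
        sums[i]=$(md5sum $testfile | cut -f1 -d ' ')   # e.g. b9362679... for iteration 1
    done

The stored checksums give the test a reference for the data-integrity check after the shutdown/upgrade path that follows.
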
00:29:05.288 [2024-07-15 19:50:56.012339] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86262 ] 00:29:05.545 [2024-07-15 19:50:56.188183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.802 [2024-07-15 19:50:56.415809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.083  Copying: 242/1024 [MB] (242 MBps) Copying: 463/1024 [MB] (221 MBps) Copying: 686/1024 [MB] (223 MBps) Copying: 912/1024 [MB] (226 MBps) Copying: 1024/1024 [MB] (average 227 MBps) 00:29:12.083 00:29:12.083 19:51:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:29:12.083 19:51:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:29:12.083 Calculate MD5 checksum, iteration 2 00:29:12.083 19:51:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:12.083 19:51:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:12.083 19:51:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:12.083 19:51:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:12.083 19:51:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:12.083 19:51:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:12.083 [2024-07-15 19:51:02.852227] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:29:12.083 [2024-07-15 19:51:02.852576] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86331 ] 00:29:12.341 [2024-07-15 19:51:03.017103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.599 [2024-07-15 19:51:03.251052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.967  Copying: 670/1024 [MB] (670 MBps) Copying: 1024/1024 [MB] (average 659 MBps) 00:29:16.967 00:29:16.967 19:51:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:29:16.967 19:51:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:18.871 19:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:18.871 19:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=61b08dabc6ea462e1aa1d6eccc178954 00:29:18.871 19:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:18.871 19:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:18.871 19:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:18.871 [2024-07-15 19:51:09.318309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.871 [2024-07-15 19:51:09.318375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:18.871 [2024-07-15 19:51:09.318393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:18.871 [2024-07-15 19:51:09.318406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.871 [2024-07-15 19:51:09.318438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.871 [2024-07-15 19:51:09.318450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:18.871 [2024-07-15 19:51:09.318462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:18.871 [2024-07-15 19:51:09.318481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.871 [2024-07-15 19:51:09.318504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.871 [2024-07-15 19:51:09.318516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:18.871 [2024-07-15 19:51:09.318538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:18.871 [2024-07-15 19:51:09.318548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.871 [2024-07-15 19:51:09.318626] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.302 ms, result 0 00:29:18.871 true 00:29:18.872 19:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:18.872 { 00:29:18.872 "name": "ftl", 00:29:18.872 "properties": [ 00:29:18.872 { 00:29:18.872 "name": "superblock_version", 00:29:18.872 "value": 5, 00:29:18.872 "read-only": true 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "name": "base_device", 00:29:18.872 "bands": [ 00:29:18.872 { 00:29:18.872 "id": 0, 00:29:18.872 "state": "FREE", 00:29:18.872 "validity": 0.0 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "id": 1, 
00:29:18.872 "state": "FREE", 00:29:18.872 "validity": 0.0 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "id": 2, 00:29:18.872 "state": "FREE", 00:29:18.872 "validity": 0.0 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "id": 3, 00:29:18.872 "state": "FREE", 00:29:18.872 "validity": 0.0 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "id": 4, 00:29:18.872 "state": "FREE", 00:29:18.872 "validity": 0.0 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "id": 5, 00:29:18.872 "state": "FREE", 00:29:18.872 "validity": 0.0 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "id": 6, 00:29:18.872 "state": "FREE", 00:29:18.872 "validity": 0.0 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "id": 7, 00:29:18.872 "state": "FREE", 00:29:18.872 "validity": 0.0 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "id": 8, 00:29:18.872 "state": "FREE", 00:29:18.872 "validity": 0.0 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "id": 9, 00:29:18.872 "state": "FREE", 00:29:18.872 "validity": 0.0 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "id": 10, 00:29:18.872 "state": "FREE", 00:29:18.872 "validity": 0.0 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "id": 11, 00:29:18.872 "state": "FREE", 00:29:18.872 "validity": 0.0 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "id": 12, 00:29:18.872 "state": "FREE", 00:29:18.872 "validity": 0.0 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "id": 13, 00:29:18.872 "state": "FREE", 00:29:18.872 "validity": 0.0 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "id": 14, 00:29:18.872 "state": "FREE", 00:29:18.872 "validity": 0.0 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "id": 15, 00:29:18.872 "state": "FREE", 00:29:18.872 "validity": 0.0 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "id": 16, 00:29:18.872 "state": "FREE", 00:29:18.872 "validity": 0.0 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "id": 17, 00:29:18.872 "state": "FREE", 00:29:18.872 "validity": 0.0 00:29:18.872 } 00:29:18.872 ], 00:29:18.872 "read-only": true 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "name": "cache_device", 00:29:18.872 "type": "bdev", 00:29:18.872 "chunks": [ 00:29:18.872 { 00:29:18.872 "id": 0, 00:29:18.872 "state": "INACTIVE", 00:29:18.872 "utilization": 0.0 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "id": 1, 00:29:18.872 "state": "CLOSED", 00:29:18.872 "utilization": 1.0 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "id": 2, 00:29:18.872 "state": "CLOSED", 00:29:18.872 "utilization": 1.0 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "id": 3, 00:29:18.872 "state": "OPEN", 00:29:18.872 "utilization": 0.001953125 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "id": 4, 00:29:18.872 "state": "OPEN", 00:29:18.872 "utilization": 0.0 00:29:18.872 } 00:29:18.872 ], 00:29:18.872 "read-only": true 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "name": "verbose_mode", 00:29:18.872 "value": true, 00:29:18.872 "unit": "", 00:29:18.872 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:18.872 }, 00:29:18.872 { 00:29:18.872 "name": "prep_upgrade_on_shutdown", 00:29:18.872 "value": false, 00:29:18.872 "unit": "", 00:29:18.872 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:18.872 } 00:29:18.872 ] 00:29:18.872 } 00:29:18.872 19:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:29:19.131 [2024-07-15 19:51:09.854814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.131 [2024-07-15 19:51:09.854866] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:19.131 [2024-07-15 19:51:09.854883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:19.131 [2024-07-15 19:51:09.854893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.131 [2024-07-15 19:51:09.854920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.131 [2024-07-15 19:51:09.854931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:19.131 [2024-07-15 19:51:09.854941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:19.131 [2024-07-15 19:51:09.854951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.131 [2024-07-15 19:51:09.854971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.131 [2024-07-15 19:51:09.854981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:19.131 [2024-07-15 19:51:09.854991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:19.131 [2024-07-15 19:51:09.855001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.131 [2024-07-15 19:51:09.855059] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.251 ms, result 0 00:29:19.131 true 00:29:19.131 19:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:19.131 19:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:29:19.131 19:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:19.389 19:51:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:29:19.389 19:51:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:29:19.389 19:51:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:19.647 [2024-07-15 19:51:10.328575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.647 [2024-07-15 19:51:10.328631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:19.647 [2024-07-15 19:51:10.328647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:19.647 [2024-07-15 19:51:10.328659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.647 [2024-07-15 19:51:10.328689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.647 [2024-07-15 19:51:10.328701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:19.647 [2024-07-15 19:51:10.328712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:19.647 [2024-07-15 19:51:10.328723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.647 [2024-07-15 19:51:10.328746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.647 [2024-07-15 19:51:10.328758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:19.647 [2024-07-15 19:51:10.328769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:19.647 [2024-07-15 19:51:10.328791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.647 [2024-07-15 19:51:10.328855] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.269 ms, result 0 00:29:19.647 true 00:29:19.647 19:51:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:19.907 { 00:29:19.907 "name": "ftl", 00:29:19.907 "properties": [ 00:29:19.907 { 00:29:19.907 "name": "superblock_version", 00:29:19.907 "value": 5, 00:29:19.907 "read-only": true 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "name": "base_device", 00:29:19.907 "bands": [ 00:29:19.907 { 00:29:19.907 "id": 0, 00:29:19.907 "state": "FREE", 00:29:19.907 "validity": 0.0 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "id": 1, 00:29:19.907 "state": "FREE", 00:29:19.907 "validity": 0.0 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "id": 2, 00:29:19.907 "state": "FREE", 00:29:19.907 "validity": 0.0 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "id": 3, 00:29:19.907 "state": "FREE", 00:29:19.907 "validity": 0.0 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "id": 4, 00:29:19.907 "state": "FREE", 00:29:19.907 "validity": 0.0 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "id": 5, 00:29:19.907 "state": "FREE", 00:29:19.907 "validity": 0.0 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "id": 6, 00:29:19.907 "state": "FREE", 00:29:19.907 "validity": 0.0 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "id": 7, 00:29:19.907 "state": "FREE", 00:29:19.907 "validity": 0.0 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "id": 8, 00:29:19.907 "state": "FREE", 00:29:19.907 "validity": 0.0 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "id": 9, 00:29:19.907 "state": "FREE", 00:29:19.907 "validity": 0.0 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "id": 10, 00:29:19.907 "state": "FREE", 00:29:19.907 "validity": 0.0 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "id": 11, 00:29:19.907 "state": "FREE", 00:29:19.907 "validity": 0.0 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "id": 12, 00:29:19.907 "state": "FREE", 00:29:19.907 "validity": 0.0 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "id": 13, 00:29:19.907 "state": "FREE", 00:29:19.907 "validity": 0.0 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "id": 14, 00:29:19.907 "state": "FREE", 00:29:19.907 "validity": 0.0 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "id": 15, 00:29:19.907 "state": "FREE", 00:29:19.907 "validity": 0.0 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "id": 16, 00:29:19.907 "state": "FREE", 00:29:19.907 "validity": 0.0 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "id": 17, 00:29:19.907 "state": "FREE", 00:29:19.907 "validity": 0.0 00:29:19.907 } 00:29:19.907 ], 00:29:19.907 "read-only": true 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "name": "cache_device", 00:29:19.907 "type": "bdev", 00:29:19.907 "chunks": [ 00:29:19.907 { 00:29:19.907 "id": 0, 00:29:19.907 "state": "INACTIVE", 00:29:19.907 "utilization": 0.0 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "id": 1, 00:29:19.907 "state": "CLOSED", 00:29:19.907 "utilization": 1.0 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "id": 2, 00:29:19.907 "state": "CLOSED", 00:29:19.907 "utilization": 1.0 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "id": 3, 00:29:19.907 "state": "OPEN", 00:29:19.907 "utilization": 0.001953125 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "id": 4, 00:29:19.907 "state": "OPEN", 00:29:19.907 "utilization": 0.0 00:29:19.907 } 00:29:19.907 ], 00:29:19.907 "read-only": true 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "name": "verbose_mode", 00:29:19.907 "value": true, 00:29:19.907 "unit": "", 00:29:19.907 
"desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:19.907 }, 00:29:19.907 { 00:29:19.907 "name": "prep_upgrade_on_shutdown", 00:29:19.907 "value": true, 00:29:19.907 "unit": "", 00:29:19.907 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:19.907 } 00:29:19.907 ] 00:29:19.907 } 00:29:19.907 19:51:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:29:19.907 19:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85954 ]] 00:29:19.907 19:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85954 00:29:19.907 19:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 85954 ']' 00:29:19.907 19:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 85954 00:29:19.907 19:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:29:19.907 19:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:19.907 19:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85954 00:29:19.907 killing process with pid 85954 00:29:19.907 19:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:19.907 19:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:19.907 19:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85954' 00:29:19.907 19:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 85954 00:29:19.908 19:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 85954 00:29:21.284 [2024-07-15 19:51:11.774493] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:21.284 [2024-07-15 19:51:11.793233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:21.284 [2024-07-15 19:51:11.793283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:21.284 [2024-07-15 19:51:11.793299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:21.284 [2024-07-15 19:51:11.793309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.284 [2024-07-15 19:51:11.793333] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:21.284 [2024-07-15 19:51:11.797413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:21.284 [2024-07-15 19:51:11.797439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:21.284 [2024-07-15 19:51:11.797451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.064 ms 00:29:21.284 [2024-07-15 19:51:11.797462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.395 [2024-07-15 19:51:19.355267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.395 [2024-07-15 19:51:19.355336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:29.395 [2024-07-15 19:51:19.355354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7557.729 ms 00:29:29.395 [2024-07-15 19:51:19.355366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.395 [2024-07-15 19:51:19.356506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.395 [2024-07-15 19:51:19.356535] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:29.395 [2024-07-15 19:51:19.356556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.121 ms 00:29:29.395 [2024-07-15 19:51:19.356577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.395 [2024-07-15 19:51:19.357661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.395 [2024-07-15 19:51:19.357678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:29.395 [2024-07-15 19:51:19.357690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.048 ms 00:29:29.395 [2024-07-15 19:51:19.357699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.395 [2024-07-15 19:51:19.374890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.395 [2024-07-15 19:51:19.374952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:29.395 [2024-07-15 19:51:19.374969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.147 ms 00:29:29.395 [2024-07-15 19:51:19.374981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.395 [2024-07-15 19:51:19.386020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.395 [2024-07-15 19:51:19.386076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:29.395 [2024-07-15 19:51:19.386108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.993 ms 00:29:29.395 [2024-07-15 19:51:19.386120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.395 [2024-07-15 19:51:19.386237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.395 [2024-07-15 19:51:19.386252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:29.395 [2024-07-15 19:51:19.386265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 00:29:29.395 [2024-07-15 19:51:19.386276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.395 [2024-07-15 19:51:19.404230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.395 [2024-07-15 19:51:19.404271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:29:29.395 [2024-07-15 19:51:19.404301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.935 ms 00:29:29.395 [2024-07-15 19:51:19.404311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.395 [2024-07-15 19:51:19.422156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.395 [2024-07-15 19:51:19.422225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:29:29.395 [2024-07-15 19:51:19.422240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.803 ms 00:29:29.395 [2024-07-15 19:51:19.422250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.395 [2024-07-15 19:51:19.438895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.395 [2024-07-15 19:51:19.438934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:29.395 [2024-07-15 19:51:19.438947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.601 ms 00:29:29.395 [2024-07-15 19:51:19.438957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.395 [2024-07-15 19:51:19.455388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.395 
[2024-07-15 19:51:19.455430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:29.395 [2024-07-15 19:51:19.455444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.343 ms 00:29:29.395 [2024-07-15 19:51:19.455455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.395 [2024-07-15 19:51:19.455493] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:29.395 [2024-07-15 19:51:19.455512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:29.395 [2024-07-15 19:51:19.455536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:29.395 [2024-07-15 19:51:19.455549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:29.395 [2024-07-15 19:51:19.455562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:29.395 [2024-07-15 19:51:19.455574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:29.395 [2024-07-15 19:51:19.455586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:29.395 [2024-07-15 19:51:19.455597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:29.395 [2024-07-15 19:51:19.455609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:29.395 [2024-07-15 19:51:19.455621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:29.395 [2024-07-15 19:51:19.455633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:29.395 [2024-07-15 19:51:19.455644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:29.396 [2024-07-15 19:51:19.455656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:29.396 [2024-07-15 19:51:19.455667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:29.396 [2024-07-15 19:51:19.455679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:29.396 [2024-07-15 19:51:19.455690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:29.396 [2024-07-15 19:51:19.455716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:29.396 [2024-07-15 19:51:19.455728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:29.396 [2024-07-15 19:51:19.455739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:29.396 [2024-07-15 19:51:19.455754] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:29.396 [2024-07-15 19:51:19.455765] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: ccc74b4e-8a2a-4a39-901a-daebb00cb136 00:29:29.396 [2024-07-15 19:51:19.455788] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:29.396 [2024-07-15 19:51:19.455799] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:29:29.396 [2024-07-15 19:51:19.455810] 
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:29.396 [2024-07-15 19:51:19.455838] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:29.396 [2024-07-15 19:51:19.455853] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:29.396 [2024-07-15 19:51:19.455866] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:29.396 [2024-07-15 19:51:19.455878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:29.396 [2024-07-15 19:51:19.455888] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:29.396 [2024-07-15 19:51:19.455899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:29.396 [2024-07-15 19:51:19.455911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.396 [2024-07-15 19:51:19.455923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:29.396 [2024-07-15 19:51:19.455935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.420 ms 00:29:29.396 [2024-07-15 19:51:19.455951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.396 [2024-07-15 19:51:19.479606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.396 [2024-07-15 19:51:19.479649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:29.396 [2024-07-15 19:51:19.479665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.629 ms 00:29:29.396 [2024-07-15 19:51:19.479677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.396 [2024-07-15 19:51:19.480321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.396 [2024-07-15 19:51:19.480338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:29.396 [2024-07-15 19:51:19.480356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.591 ms 00:29:29.396 [2024-07-15 19:51:19.480367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.396 [2024-07-15 19:51:19.551042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:29.396 [2024-07-15 19:51:19.551100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:29.396 [2024-07-15 19:51:19.551116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:29.396 [2024-07-15 19:51:19.551126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.396 [2024-07-15 19:51:19.551180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:29.396 [2024-07-15 19:51:19.551191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:29.396 [2024-07-15 19:51:19.551207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:29.396 [2024-07-15 19:51:19.551217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.396 [2024-07-15 19:51:19.551313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:29.396 [2024-07-15 19:51:19.551327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:29.396 [2024-07-15 19:51:19.551338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:29.396 [2024-07-15 19:51:19.551348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.396 [2024-07-15 19:51:19.551367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:29.396 [2024-07-15 
19:51:19.551384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:29.396 [2024-07-15 19:51:19.551394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:29.396 [2024-07-15 19:51:19.551408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.396 [2024-07-15 19:51:19.679795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:29.396 [2024-07-15 19:51:19.679850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:29.396 [2024-07-15 19:51:19.679865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:29.396 [2024-07-15 19:51:19.679876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.396 [2024-07-15 19:51:19.792251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:29.396 [2024-07-15 19:51:19.792309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:29.396 [2024-07-15 19:51:19.792334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:29.396 [2024-07-15 19:51:19.792344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.396 [2024-07-15 19:51:19.792437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:29.396 [2024-07-15 19:51:19.792450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:29.396 [2024-07-15 19:51:19.792472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:29.396 [2024-07-15 19:51:19.792481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.396 [2024-07-15 19:51:19.792526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:29.396 [2024-07-15 19:51:19.792537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:29.396 [2024-07-15 19:51:19.792548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:29.396 [2024-07-15 19:51:19.792558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.396 [2024-07-15 19:51:19.792671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:29.396 [2024-07-15 19:51:19.792684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:29.396 [2024-07-15 19:51:19.792695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:29.396 [2024-07-15 19:51:19.792705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.396 [2024-07-15 19:51:19.792739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:29.396 [2024-07-15 19:51:19.792751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:29.396 [2024-07-15 19:51:19.792761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:29.396 [2024-07-15 19:51:19.792771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.396 [2024-07-15 19:51:19.792846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:29.396 [2024-07-15 19:51:19.792866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:29.396 [2024-07-15 19:51:19.792878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:29.396 [2024-07-15 19:51:19.792888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.396 [2024-07-15 19:51:19.792938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:29:29.396 [2024-07-15 19:51:19.792951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:29.396 [2024-07-15 19:51:19.792961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:29.396 [2024-07-15 19:51:19.792972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.396 [2024-07-15 19:51:19.793098] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7999.794 ms, result 0 00:29:33.596 19:51:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:33.596 19:51:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:33.596 19:51:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:33.596 19:51:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:33.596 19:51:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:33.596 19:51:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86540 00:29:33.596 19:51:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:33.596 19:51:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:33.596 19:51:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86540 00:29:33.596 19:51:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86540 ']' 00:29:33.596 19:51:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.596 19:51:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:33.596 19:51:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.596 19:51:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:33.596 19:51:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:33.596 [2024-07-15 19:51:23.774126] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:29:33.596 [2024-07-15 19:51:23.774595] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86540 ] 00:29:33.596 [2024-07-15 19:51:23.954975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.596 [2024-07-15 19:51:24.217475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.528 [2024-07-15 19:51:25.265666] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:34.528 [2024-07-15 19:51:25.265888] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:34.787 [2024-07-15 19:51:25.414511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.787 [2024-07-15 19:51:25.414571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:34.787 [2024-07-15 19:51:25.414591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:34.787 [2024-07-15 19:51:25.414613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.787 [2024-07-15 19:51:25.414691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.787 [2024-07-15 19:51:25.414704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:34.787 [2024-07-15 19:51:25.414717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:29:34.787 [2024-07-15 19:51:25.414728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.787 [2024-07-15 19:51:25.414754] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:34.787 [2024-07-15 19:51:25.416009] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:34.787 [2024-07-15 19:51:25.416044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.787 [2024-07-15 19:51:25.416056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:34.787 [2024-07-15 19:51:25.416067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.295 ms 00:29:34.787 [2024-07-15 19:51:25.416077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.787 [2024-07-15 19:51:25.417572] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:34.787 [2024-07-15 19:51:25.438538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.787 [2024-07-15 19:51:25.438584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:34.787 [2024-07-15 19:51:25.438607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.966 ms 00:29:34.787 [2024-07-15 19:51:25.438620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.787 [2024-07-15 19:51:25.438706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.787 [2024-07-15 19:51:25.438721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:34.787 [2024-07-15 19:51:25.438733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:29:34.787 [2024-07-15 19:51:25.438745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.787 [2024-07-15 19:51:25.446003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.787 [2024-07-15 
19:51:25.446056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:34.787 [2024-07-15 19:51:25.446071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.146 ms 00:29:34.787 [2024-07-15 19:51:25.446082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.787 [2024-07-15 19:51:25.446157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.787 [2024-07-15 19:51:25.446174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:34.787 [2024-07-15 19:51:25.446186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:29:34.787 [2024-07-15 19:51:25.446201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.787 [2024-07-15 19:51:25.446256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.787 [2024-07-15 19:51:25.446269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:34.787 [2024-07-15 19:51:25.446281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:34.787 [2024-07-15 19:51:25.446291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.787 [2024-07-15 19:51:25.446322] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:34.787 [2024-07-15 19:51:25.452290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.787 [2024-07-15 19:51:25.452322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:34.787 [2024-07-15 19:51:25.452334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.974 ms 00:29:34.787 [2024-07-15 19:51:25.452344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.787 [2024-07-15 19:51:25.452376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.787 [2024-07-15 19:51:25.452387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:34.787 [2024-07-15 19:51:25.452398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:34.787 [2024-07-15 19:51:25.452411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.787 [2024-07-15 19:51:25.452467] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:34.787 [2024-07-15 19:51:25.452491] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:34.787 [2024-07-15 19:51:25.452527] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:34.787 [2024-07-15 19:51:25.452544] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:29:34.787 [2024-07-15 19:51:25.452628] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:34.787 [2024-07-15 19:51:25.452641] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:34.787 [2024-07-15 19:51:25.452658] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:29:34.787 [2024-07-15 19:51:25.452671] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:34.787 [2024-07-15 19:51:25.452683] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:29:34.787 [2024-07-15 19:51:25.452695] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:34.787 [2024-07-15 19:51:25.452705] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:34.787 [2024-07-15 19:51:25.452715] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:34.787 [2024-07-15 19:51:25.452726] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:34.787 [2024-07-15 19:51:25.452736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.787 [2024-07-15 19:51:25.452746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:34.787 [2024-07-15 19:51:25.452756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.272 ms 00:29:34.787 [2024-07-15 19:51:25.452766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.787 [2024-07-15 19:51:25.452857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.787 [2024-07-15 19:51:25.452870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:34.787 [2024-07-15 19:51:25.452880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:29:34.787 [2024-07-15 19:51:25.452895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.787 [2024-07-15 19:51:25.452985] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:34.787 [2024-07-15 19:51:25.452999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:34.787 [2024-07-15 19:51:25.453016] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:34.787 [2024-07-15 19:51:25.453027] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.787 [2024-07-15 19:51:25.453037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:34.787 [2024-07-15 19:51:25.453046] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:34.787 [2024-07-15 19:51:25.453056] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:34.787 [2024-07-15 19:51:25.453065] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:34.787 [2024-07-15 19:51:25.453075] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:34.787 [2024-07-15 19:51:25.453084] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.787 [2024-07-15 19:51:25.453094] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:34.787 [2024-07-15 19:51:25.453104] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:34.787 [2024-07-15 19:51:25.453113] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.788 [2024-07-15 19:51:25.453122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:34.788 [2024-07-15 19:51:25.453132] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:34.788 [2024-07-15 19:51:25.453141] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.788 [2024-07-15 19:51:25.453150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:34.788 [2024-07-15 19:51:25.453160] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:34.788 [2024-07-15 19:51:25.453168] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.788 [2024-07-15 19:51:25.453178] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:34.788 [2024-07-15 19:51:25.453187] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:34.788 [2024-07-15 19:51:25.453196] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:34.788 [2024-07-15 19:51:25.453205] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:34.788 [2024-07-15 19:51:25.453214] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:34.788 [2024-07-15 19:51:25.453223] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:34.788 [2024-07-15 19:51:25.453232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:34.788 [2024-07-15 19:51:25.453240] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:34.788 [2024-07-15 19:51:25.453249] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:34.788 [2024-07-15 19:51:25.453258] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:34.788 [2024-07-15 19:51:25.453268] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:34.788 [2024-07-15 19:51:25.453276] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:34.788 [2024-07-15 19:51:25.453285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:34.788 [2024-07-15 19:51:25.453294] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:34.788 [2024-07-15 19:51:25.453303] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.788 [2024-07-15 19:51:25.453312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:34.788 [2024-07-15 19:51:25.453321] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:34.788 [2024-07-15 19:51:25.453330] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.788 [2024-07-15 19:51:25.453339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:34.788 [2024-07-15 19:51:25.453348] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:34.788 [2024-07-15 19:51:25.453358] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.788 [2024-07-15 19:51:25.453367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:34.788 [2024-07-15 19:51:25.453376] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:34.788 [2024-07-15 19:51:25.453386] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.788 [2024-07-15 19:51:25.453395] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:34.788 [2024-07-15 19:51:25.453409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:34.788 [2024-07-15 19:51:25.453419] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:34.788 [2024-07-15 19:51:25.453428] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.788 [2024-07-15 19:51:25.453438] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:34.788 [2024-07-15 19:51:25.453448] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:34.788 [2024-07-15 19:51:25.453457] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:34.788 [2024-07-15 19:51:25.453466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:34.788 [2024-07-15 19:51:25.453487] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:34.788 [2024-07-15 19:51:25.453496] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:34.788 [2024-07-15 19:51:25.453506] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:34.788 [2024-07-15 19:51:25.453518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:34.788 [2024-07-15 19:51:25.453529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:34.788 [2024-07-15 19:51:25.453540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:34.788 [2024-07-15 19:51:25.453550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:34.788 [2024-07-15 19:51:25.453560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:34.788 [2024-07-15 19:51:25.453570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:34.788 [2024-07-15 19:51:25.453580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:34.788 [2024-07-15 19:51:25.453591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:34.788 [2024-07-15 19:51:25.453601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:34.788 [2024-07-15 19:51:25.453611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:34.788 [2024-07-15 19:51:25.453621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:34.788 [2024-07-15 19:51:25.453631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:34.788 [2024-07-15 19:51:25.453641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:34.788 [2024-07-15 19:51:25.453651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:34.788 [2024-07-15 19:51:25.453661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:34.788 [2024-07-15 19:51:25.453672] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:34.788 [2024-07-15 19:51:25.453683] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:34.788 [2024-07-15 19:51:25.453694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:34.788 [2024-07-15 19:51:25.453704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:34.788 [2024-07-15 19:51:25.453714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:34.788 [2024-07-15 19:51:25.453725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:34.788 [2024-07-15 19:51:25.453736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.788 [2024-07-15 19:51:25.453746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:34.788 [2024-07-15 19:51:25.453756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.802 ms 00:29:34.788 [2024-07-15 19:51:25.453771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.788 [2024-07-15 19:51:25.453830] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:34.788 [2024-07-15 19:51:25.453842] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:37.395 [2024-07-15 19:51:27.941758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.395 [2024-07-15 19:51:27.941827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:37.395 [2024-07-15 19:51:27.941845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2487.912 ms 00:29:37.395 [2024-07-15 19:51:27.941857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.395 [2024-07-15 19:51:27.987631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.395 [2024-07-15 19:51:27.987687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:37.395 [2024-07-15 19:51:27.987704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.341 ms 00:29:37.395 [2024-07-15 19:51:27.987719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.395 [2024-07-15 19:51:27.987858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.395 [2024-07-15 19:51:27.987875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:37.395 [2024-07-15 19:51:27.987886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:37.395 [2024-07-15 19:51:27.987911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.395 [2024-07-15 19:51:28.041535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.395 [2024-07-15 19:51:28.041591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:37.395 [2024-07-15 19:51:28.041606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 53.576 ms 00:29:37.395 [2024-07-15 19:51:28.041616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.395 [2024-07-15 19:51:28.041679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.395 [2024-07-15 19:51:28.041691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:37.395 [2024-07-15 19:51:28.041701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:37.395 [2024-07-15 19:51:28.041715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.395 [2024-07-15 19:51:28.042211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.395 [2024-07-15 19:51:28.042226] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:37.395 [2024-07-15 19:51:28.042242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.414 ms 00:29:37.395 [2024-07-15 19:51:28.042252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.396 [2024-07-15 19:51:28.042299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.396 [2024-07-15 19:51:28.042316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:37.396 [2024-07-15 19:51:28.042326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:37.396 [2024-07-15 19:51:28.042336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.396 [2024-07-15 19:51:28.064757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.396 [2024-07-15 19:51:28.064810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:37.396 [2024-07-15 19:51:28.064826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.397 ms 00:29:37.396 [2024-07-15 19:51:28.064837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.396 [2024-07-15 19:51:28.085441] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:37.396 [2024-07-15 19:51:28.085491] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:37.396 [2024-07-15 19:51:28.085508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.396 [2024-07-15 19:51:28.085520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:29:37.396 [2024-07-15 19:51:28.085533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.533 ms 00:29:37.396 [2024-07-15 19:51:28.085543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.396 [2024-07-15 19:51:28.107802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.396 [2024-07-15 19:51:28.107854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:29:37.396 [2024-07-15 19:51:28.107869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.205 ms 00:29:37.396 [2024-07-15 19:51:28.107897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.396 [2024-07-15 19:51:28.129634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.396 [2024-07-15 19:51:28.129681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:29:37.396 [2024-07-15 19:51:28.129695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.676 ms 00:29:37.396 [2024-07-15 19:51:28.129706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.396 [2024-07-15 19:51:28.150593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.396 [2024-07-15 19:51:28.150661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:29:37.396 [2024-07-15 19:51:28.150682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.837 ms 00:29:37.396 [2024-07-15 19:51:28.150699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.396 [2024-07-15 19:51:28.151711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.396 [2024-07-15 19:51:28.151749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:37.396 [2024-07-15 
19:51:28.151763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.859 ms 00:29:37.396 [2024-07-15 19:51:28.151791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.654 [2024-07-15 19:51:28.261924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.654 [2024-07-15 19:51:28.261994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:37.654 [2024-07-15 19:51:28.262013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 110.075 ms 00:29:37.654 [2024-07-15 19:51:28.262025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.654 [2024-07-15 19:51:28.275276] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:37.654 [2024-07-15 19:51:28.276378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.654 [2024-07-15 19:51:28.276407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:37.654 [2024-07-15 19:51:28.276421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.242 ms 00:29:37.654 [2024-07-15 19:51:28.276437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.654 [2024-07-15 19:51:28.276543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.654 [2024-07-15 19:51:28.276557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:29:37.654 [2024-07-15 19:51:28.276568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:37.654 [2024-07-15 19:51:28.276579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.654 [2024-07-15 19:51:28.276640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.654 [2024-07-15 19:51:28.276653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:37.654 [2024-07-15 19:51:28.276664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:37.654 [2024-07-15 19:51:28.276674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.654 [2024-07-15 19:51:28.276700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.654 [2024-07-15 19:51:28.276711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:37.654 [2024-07-15 19:51:28.276721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:37.654 [2024-07-15 19:51:28.276731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.654 [2024-07-15 19:51:28.276766] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:37.654 [2024-07-15 19:51:28.276789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.654 [2024-07-15 19:51:28.276800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:37.654 [2024-07-15 19:51:28.276811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:29:37.654 [2024-07-15 19:51:28.276821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.654 [2024-07-15 19:51:28.318587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.654 [2024-07-15 19:51:28.318642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:37.654 [2024-07-15 19:51:28.318658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.742 ms 00:29:37.654 [2024-07-15 19:51:28.318687] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.654 [2024-07-15 19:51:28.318793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.654 [2024-07-15 19:51:28.318808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:37.654 [2024-07-15 19:51:28.318820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:29:37.654 [2024-07-15 19:51:28.318831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.654 [2024-07-15 19:51:28.320159] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2905.085 ms, result 0 00:29:37.654 [2024-07-15 19:51:28.335019] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:37.654 [2024-07-15 19:51:28.351014] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:37.654 [2024-07-15 19:51:28.361356] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:37.654 19:51:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:37.655 19:51:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:29:37.655 19:51:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:37.655 19:51:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:37.655 19:51:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:37.913 [2024-07-15 19:51:28.653535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.913 [2024-07-15 19:51:28.653593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:37.913 [2024-07-15 19:51:28.653609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:37.913 [2024-07-15 19:51:28.653638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.913 [2024-07-15 19:51:28.653670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.913 [2024-07-15 19:51:28.653686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:37.913 [2024-07-15 19:51:28.653698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:37.913 [2024-07-15 19:51:28.653709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.913 [2024-07-15 19:51:28.653732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.913 [2024-07-15 19:51:28.653744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:37.913 [2024-07-15 19:51:28.653756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:37.913 [2024-07-15 19:51:28.653767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.913 [2024-07-15 19:51:28.653854] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.286 ms, result 0 00:29:37.913 true 00:29:37.913 19:51:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:38.172 { 00:29:38.172 "name": "ftl", 00:29:38.172 "properties": [ 00:29:38.172 { 00:29:38.172 "name": "superblock_version", 00:29:38.172 "value": 5, 00:29:38.172 "read-only": true 00:29:38.172 }, 
00:29:38.172 { 00:29:38.172 "name": "base_device", 00:29:38.172 "bands": [ 00:29:38.172 { 00:29:38.172 "id": 0, 00:29:38.172 "state": "CLOSED", 00:29:38.172 "validity": 1.0 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "id": 1, 00:29:38.172 "state": "CLOSED", 00:29:38.172 "validity": 1.0 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "id": 2, 00:29:38.172 "state": "CLOSED", 00:29:38.172 "validity": 0.007843137254901933 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "id": 3, 00:29:38.172 "state": "FREE", 00:29:38.172 "validity": 0.0 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "id": 4, 00:29:38.172 "state": "FREE", 00:29:38.172 "validity": 0.0 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "id": 5, 00:29:38.172 "state": "FREE", 00:29:38.172 "validity": 0.0 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "id": 6, 00:29:38.172 "state": "FREE", 00:29:38.172 "validity": 0.0 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "id": 7, 00:29:38.172 "state": "FREE", 00:29:38.172 "validity": 0.0 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "id": 8, 00:29:38.172 "state": "FREE", 00:29:38.172 "validity": 0.0 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "id": 9, 00:29:38.172 "state": "FREE", 00:29:38.172 "validity": 0.0 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "id": 10, 00:29:38.172 "state": "FREE", 00:29:38.172 "validity": 0.0 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "id": 11, 00:29:38.172 "state": "FREE", 00:29:38.172 "validity": 0.0 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "id": 12, 00:29:38.172 "state": "FREE", 00:29:38.172 "validity": 0.0 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "id": 13, 00:29:38.172 "state": "FREE", 00:29:38.172 "validity": 0.0 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "id": 14, 00:29:38.172 "state": "FREE", 00:29:38.172 "validity": 0.0 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "id": 15, 00:29:38.172 "state": "FREE", 00:29:38.172 "validity": 0.0 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "id": 16, 00:29:38.172 "state": "FREE", 00:29:38.172 "validity": 0.0 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "id": 17, 00:29:38.172 "state": "FREE", 00:29:38.172 "validity": 0.0 00:29:38.172 } 00:29:38.172 ], 00:29:38.172 "read-only": true 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "name": "cache_device", 00:29:38.172 "type": "bdev", 00:29:38.172 "chunks": [ 00:29:38.172 { 00:29:38.172 "id": 0, 00:29:38.172 "state": "INACTIVE", 00:29:38.172 "utilization": 0.0 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "id": 1, 00:29:38.172 "state": "OPEN", 00:29:38.172 "utilization": 0.0 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "id": 2, 00:29:38.172 "state": "OPEN", 00:29:38.172 "utilization": 0.0 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "id": 3, 00:29:38.172 "state": "FREE", 00:29:38.172 "utilization": 0.0 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "id": 4, 00:29:38.172 "state": "FREE", 00:29:38.172 "utilization": 0.0 00:29:38.172 } 00:29:38.172 ], 00:29:38.172 "read-only": true 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "name": "verbose_mode", 00:29:38.172 "value": true, 00:29:38.172 "unit": "", 00:29:38.172 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:38.172 }, 00:29:38.172 { 00:29:38.172 "name": "prep_upgrade_on_shutdown", 00:29:38.172 "value": false, 00:29:38.172 "unit": "", 00:29:38.172 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:38.172 } 00:29:38.172 ] 00:29:38.172 } 00:29:38.172 19:51:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:29:38.172 19:51:28 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:38.172 19:51:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:38.430 19:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:38.430 19:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:38.430 19:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:38.430 19:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:38.430 19:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:38.689 Validate MD5 checksum, iteration 1 00:29:38.689 19:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:38.689 19:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:38.689 19:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:29:38.689 19:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:38.689 19:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:38.689 19:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:38.689 19:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:38.689 19:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:38.689 19:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:38.689 19:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:38.689 19:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:38.689 19:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:38.689 19:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:38.689 [2024-07-15 19:51:29.437152] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
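The two jq invocations traced just above (upgrade_shutdown.sh @82 and @89) are pre-checks: before any checksum pass, the test queries bdev_ftl_get_properties and requires that no write-buffer chunk holds data and no band is still OPENED, i.e. the device has settled. A condensed sketch of that check, assuming the rpc.py path shown in the trace (the jq filters are copied verbatim from it):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # count cache chunks whose utilization is non-zero
    used=$($rpc bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    # count bands still reported as OPENED (filter copied verbatim from the trace;
    # in the JSON printed above the band list sits under the "base_device" property)
    opened=$($rpc bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')
    [[ "$used" -eq 0 && "$opened" -eq 0 ]] || exit 1

Both counters come back 0 in this run (used=0, opened=0), so the first validation iteration starts.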
00:29:38.689 [2024-07-15 19:51:29.437979] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86610 ] 00:29:38.948 [2024-07-15 19:51:29.618389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.206 [2024-07-15 19:51:29.860016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.962  Copying: 500/1024 [MB] (500 MBps) Copying: 1023/1024 [MB] (523 MBps) Copying: 1024/1024 [MB] (average 511 MBps) 00:29:43.962 00:29:43.962 19:51:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:43.962 19:51:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:46.490 19:51:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:46.490 Validate MD5 checksum, iteration 2 00:29:46.490 19:51:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b9362679809ccd032b7577603bf3c4f0 00:29:46.490 19:51:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b9362679809ccd032b7577603bf3c4f0 != \b\9\3\6\2\6\7\9\8\0\9\c\c\d\0\3\2\b\7\5\7\7\6\0\3\b\f\3\c\4\f\0 ]] 00:29:46.490 19:51:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:46.490 19:51:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:46.490 19:51:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:46.490 19:51:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:46.490 19:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:46.490 19:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:46.490 19:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:46.490 19:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:46.490 19:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:46.490 [2024-07-15 19:51:36.902973] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
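Each "Validate MD5 checksum" iteration that follows works the same way: spdk_dd, driven over the NVMe/TCP attachment of ftln1, dumps 1024 blocks of 1 MiB starting at block offset skip into test/ftl/file, the dump is hashed with md5sum, and skip then advances by the block count so the next iteration covers the following 1 GiB window (skip goes 0, then 1024, then 2048 after the second pass). The backslash-riddled string in the [[ ... != \b\9\3\6... ]] lines is only bash xtrace quoting of the expected digest, not data. A compressed sketch of the loop, with variable names chosen here for illustration and the binary path and flags taken from the trace:

    dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    skip=0
    for i in 0 1; do
        # read one 1 GiB window (1024 x 1 MiB blocks) from the FTL bdev exposed over NVMe/TCP
        $dd_bin '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
            --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        # record (or re-check) the digest for this window:
        # b9362679... for window 0 and 61b08dab... for window 1 in this run
        sums[$i]=$(md5sum "$file" | cut -f1 -d ' ')
        skip=$((skip + 1024))    # next window starts where this one ended
    done

The digests seen here are the same values the post-recovery pass further down has to reproduce.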
00:29:46.490 [2024-07-15 19:51:36.903122] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86698 ] 00:29:46.490 [2024-07-15 19:51:37.072894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.748 [2024-07-15 19:51:37.381683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.497  Copying: 593/1024 [MB] (593 MBps) Copying: 1024/1024 [MB] (average 595 MBps) 00:29:52.497 00:29:52.497 19:51:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:52.497 19:51:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:55.025 19:51:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:55.025 19:51:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=61b08dabc6ea462e1aa1d6eccc178954 00:29:55.025 19:51:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 61b08dabc6ea462e1aa1d6eccc178954 != \6\1\b\0\8\d\a\b\c\6\e\a\4\6\2\e\1\a\a\1\d\6\e\c\c\c\1\7\8\9\5\4 ]] 00:29:55.025 19:51:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:55.025 19:51:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:55.025 19:51:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:29:55.025 19:51:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 86540 ]] 00:29:55.025 19:51:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 86540 00:29:55.025 19:51:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:29:55.025 19:51:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:29:55.025 19:51:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:55.025 19:51:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:55.025 19:51:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:55.025 19:51:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86782 00:29:55.025 19:51:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:55.025 19:51:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:55.025 19:51:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86782 00:29:55.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:55.025 19:51:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86782 ']' 00:29:55.025 19:51:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:55.025 19:51:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:55.025 19:51:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
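The tcp_target_shutdown_dirty / tcp_target_setup pair traced above is the crash simulation at the heart of the test: pid 86540 is killed with SIGKILL, so FTL never runs its clean-shutdown path and the dirty flag set earlier ("Set FTL dirty state") stays in place; a fresh spdk_tgt (pid 86782) is then started from the same saved tgt.json and has to rebuild its state, which is what the recovery trace below shows. A minimal sketch of that step, assuming the paths from the trace; the polling loop stands in for the waitforlisten helper the script actually uses:

    # simulate a crash: SIGKILL gives FTL no chance to persist a clean-shutdown state
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid

    # restart the target from the configuration captured before the kill
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!

    # block until the RPC socket answers before issuing further commands
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &> /dev/null; do
        sleep 0.5
    done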
00:29:55.025 19:51:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:55.025 19:51:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:55.026 [2024-07-15 19:51:45.317129] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:29:55.026 [2024-07-15 19:51:45.317289] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86782 ] 00:29:55.026 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 86540 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:29:55.026 [2024-07-15 19:51:45.485338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.026 [2024-07-15 19:51:45.773092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.395 [2024-07-15 19:51:46.914964] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:56.395 [2024-07-15 19:51:46.915037] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:56.395 [2024-07-15 19:51:47.065860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.395 [2024-07-15 19:51:47.065930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:56.395 [2024-07-15 19:51:47.065951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:56.395 [2024-07-15 19:51:47.065964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.395 [2024-07-15 19:51:47.066039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.395 [2024-07-15 19:51:47.066053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:56.395 [2024-07-15 19:51:47.066066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:29:56.395 [2024-07-15 19:51:47.066077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.395 [2024-07-15 19:51:47.066105] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:56.395 [2024-07-15 19:51:47.067402] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:56.395 [2024-07-15 19:51:47.067445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.395 [2024-07-15 19:51:47.067458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:56.395 [2024-07-15 19:51:47.067471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.345 ms 00:29:56.395 [2024-07-15 19:51:47.067483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.395 [2024-07-15 19:51:47.067985] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:56.395 [2024-07-15 19:51:47.099571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.395 [2024-07-15 19:51:47.099673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:56.395 [2024-07-15 19:51:47.099694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.582 ms 00:29:56.395 [2024-07-15 19:51:47.099729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.395 [2024-07-15 19:51:47.119041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:29:56.395 [2024-07-15 19:51:47.119120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:56.395 [2024-07-15 19:51:47.119138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:29:56.395 [2024-07-15 19:51:47.119150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.395 [2024-07-15 19:51:47.119795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.395 [2024-07-15 19:51:47.119825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:56.395 [2024-07-15 19:51:47.119845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.515 ms 00:29:56.395 [2024-07-15 19:51:47.119857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.395 [2024-07-15 19:51:47.119931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.395 [2024-07-15 19:51:47.119947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:56.395 [2024-07-15 19:51:47.119960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:29:56.395 [2024-07-15 19:51:47.119971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.395 [2024-07-15 19:51:47.120009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.395 [2024-07-15 19:51:47.120023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:56.395 [2024-07-15 19:51:47.120034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:56.395 [2024-07-15 19:51:47.120049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.395 [2024-07-15 19:51:47.120082] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:56.395 [2024-07-15 19:51:47.126790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.395 [2024-07-15 19:51:47.126853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:56.395 [2024-07-15 19:51:47.126869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.716 ms 00:29:56.395 [2024-07-15 19:51:47.126881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.395 [2024-07-15 19:51:47.126928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.395 [2024-07-15 19:51:47.126941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:56.395 [2024-07-15 19:51:47.126955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:56.395 [2024-07-15 19:51:47.126967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.395 [2024-07-15 19:51:47.127026] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:56.395 [2024-07-15 19:51:47.127056] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:56.395 [2024-07-15 19:51:47.127102] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:56.395 [2024-07-15 19:51:47.127124] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:29:56.395 [2024-07-15 19:51:47.127225] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:56.395 [2024-07-15 19:51:47.127240] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:56.395 [2024-07-15 19:51:47.127256] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:29:56.395 [2024-07-15 19:51:47.127271] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:56.395 [2024-07-15 19:51:47.127285] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:56.395 [2024-07-15 19:51:47.127298] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:56.395 [2024-07-15 19:51:47.127310] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:56.395 [2024-07-15 19:51:47.127326] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:56.395 [2024-07-15 19:51:47.127337] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:56.395 [2024-07-15 19:51:47.127350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.395 [2024-07-15 19:51:47.127362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:56.395 [2024-07-15 19:51:47.127379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.327 ms 00:29:56.395 [2024-07-15 19:51:47.127391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.395 [2024-07-15 19:51:47.127482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.395 [2024-07-15 19:51:47.127495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:56.395 [2024-07-15 19:51:47.127508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:29:56.395 [2024-07-15 19:51:47.127519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.395 [2024-07-15 19:51:47.127630] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:56.395 [2024-07-15 19:51:47.127644] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:56.395 [2024-07-15 19:51:47.127657] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:56.395 [2024-07-15 19:51:47.127669] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.395 [2024-07-15 19:51:47.127681] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:56.395 [2024-07-15 19:51:47.127694] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:56.395 [2024-07-15 19:51:47.127705] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:56.395 [2024-07-15 19:51:47.127716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:56.395 [2024-07-15 19:51:47.127728] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:56.395 [2024-07-15 19:51:47.127739] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.395 [2024-07-15 19:51:47.127751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:56.395 [2024-07-15 19:51:47.127762] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:56.395 [2024-07-15 19:51:47.127773] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.395 [2024-07-15 19:51:47.127803] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:56.395 [2024-07-15 19:51:47.127815] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:29:56.395 [2024-07-15 19:51:47.127826] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.395 [2024-07-15 19:51:47.127837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:56.395 [2024-07-15 19:51:47.127848] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:56.395 [2024-07-15 19:51:47.127859] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.395 [2024-07-15 19:51:47.127872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:56.395 [2024-07-15 19:51:47.127884] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:56.395 [2024-07-15 19:51:47.127900] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:56.395 [2024-07-15 19:51:47.127911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:56.395 [2024-07-15 19:51:47.127922] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:56.395 [2024-07-15 19:51:47.127933] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:56.395 [2024-07-15 19:51:47.127944] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:56.395 [2024-07-15 19:51:47.127955] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:56.395 [2024-07-15 19:51:47.127965] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:56.395 [2024-07-15 19:51:47.127977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:56.395 [2024-07-15 19:51:47.127988] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:56.396 [2024-07-15 19:51:47.127998] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:56.396 [2024-07-15 19:51:47.128009] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:56.396 [2024-07-15 19:51:47.128020] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:56.396 [2024-07-15 19:51:47.128031] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.396 [2024-07-15 19:51:47.128042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:56.396 [2024-07-15 19:51:47.128053] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:56.396 [2024-07-15 19:51:47.128063] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.396 [2024-07-15 19:51:47.128074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:56.396 [2024-07-15 19:51:47.128085] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:56.396 [2024-07-15 19:51:47.128096] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.396 [2024-07-15 19:51:47.128106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:56.396 [2024-07-15 19:51:47.128117] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:56.396 [2024-07-15 19:51:47.128128] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.396 [2024-07-15 19:51:47.128138] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:56.396 [2024-07-15 19:51:47.128151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:56.396 [2024-07-15 19:51:47.128163] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:56.396 [2024-07-15 19:51:47.128174] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:29:56.396 [2024-07-15 19:51:47.128186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:56.396 [2024-07-15 19:51:47.128198] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:56.396 [2024-07-15 19:51:47.128227] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:56.396 [2024-07-15 19:51:47.128239] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:56.396 [2024-07-15 19:51:47.128251] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:56.396 [2024-07-15 19:51:47.128263] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:56.396 [2024-07-15 19:51:47.128275] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:56.396 [2024-07-15 19:51:47.128295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:56.396 [2024-07-15 19:51:47.128308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:56.396 [2024-07-15 19:51:47.128321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:56.396 [2024-07-15 19:51:47.128334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:56.396 [2024-07-15 19:51:47.128346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:56.396 [2024-07-15 19:51:47.128358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:56.396 [2024-07-15 19:51:47.128371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:56.396 [2024-07-15 19:51:47.128383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:56.396 [2024-07-15 19:51:47.128395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:56.396 [2024-07-15 19:51:47.128409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:56.396 [2024-07-15 19:51:47.128421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:56.396 [2024-07-15 19:51:47.128433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:56.396 [2024-07-15 19:51:47.128446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:56.396 [2024-07-15 19:51:47.128458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:56.396 [2024-07-15 19:51:47.128470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:56.396 [2024-07-15 19:51:47.128483] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:29:56.396 [2024-07-15 19:51:47.128496] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:56.396 [2024-07-15 19:51:47.128508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:56.396 [2024-07-15 19:51:47.128521] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:56.396 [2024-07-15 19:51:47.128533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:56.396 [2024-07-15 19:51:47.128545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:56.396 [2024-07-15 19:51:47.128558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.396 [2024-07-15 19:51:47.128570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:56.396 [2024-07-15 19:51:47.128582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.991 ms 00:29:56.396 [2024-07-15 19:51:47.128593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.396 [2024-07-15 19:51:47.178911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.396 [2024-07-15 19:51:47.178977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:56.396 [2024-07-15 19:51:47.178996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 50.245 ms 00:29:56.396 [2024-07-15 19:51:47.179009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.396 [2024-07-15 19:51:47.179077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.396 [2024-07-15 19:51:47.179091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:56.396 [2024-07-15 19:51:47.179104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:56.396 [2024-07-15 19:51:47.179122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.705 [2024-07-15 19:51:47.237433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.705 [2024-07-15 19:51:47.237498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:56.705 [2024-07-15 19:51:47.237515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 58.220 ms 00:29:56.705 [2024-07-15 19:51:47.237527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.705 [2024-07-15 19:51:47.237599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.705 [2024-07-15 19:51:47.237617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:56.705 [2024-07-15 19:51:47.237630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:56.705 [2024-07-15 19:51:47.237641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.705 [2024-07-15 19:51:47.237812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.705 [2024-07-15 19:51:47.237829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:56.705 [2024-07-15 19:51:47.237842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.095 ms 00:29:56.705 [2024-07-15 19:51:47.237854] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:56.705 [2024-07-15 19:51:47.237903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.705 [2024-07-15 19:51:47.237917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:56.705 [2024-07-15 19:51:47.237932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:29:56.705 [2024-07-15 19:51:47.237944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.705 [2024-07-15 19:51:47.264409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.705 [2024-07-15 19:51:47.264476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:56.705 [2024-07-15 19:51:47.264505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.435 ms 00:29:56.705 [2024-07-15 19:51:47.264517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.705 [2024-07-15 19:51:47.264683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.705 [2024-07-15 19:51:47.264699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:29:56.705 [2024-07-15 19:51:47.264712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:56.705 [2024-07-15 19:51:47.264724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.705 [2024-07-15 19:51:47.304943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.705 [2024-07-15 19:51:47.305040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:29:56.705 [2024-07-15 19:51:47.305061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.184 ms 00:29:56.705 [2024-07-15 19:51:47.305074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.705 [2024-07-15 19:51:47.323172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.705 [2024-07-15 19:51:47.323245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:56.705 [2024-07-15 19:51:47.323263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.875 ms 00:29:56.705 [2024-07-15 19:51:47.323275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.705 [2024-07-15 19:51:47.427553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.705 [2024-07-15 19:51:47.427628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:56.705 [2024-07-15 19:51:47.427647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 104.164 ms 00:29:56.705 [2024-07-15 19:51:47.427660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.705 [2024-07-15 19:51:47.427922] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:29:56.705 [2024-07-15 19:51:47.428076] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:29:56.705 [2024-07-15 19:51:47.428210] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:29:56.705 [2024-07-15 19:51:47.428348] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:29:56.705 [2024-07-15 19:51:47.428383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.705 [2024-07-15 19:51:47.428396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:29:56.705 
[2024-07-15 19:51:47.428409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.623 ms 00:29:56.705 [2024-07-15 19:51:47.428421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.705 [2024-07-15 19:51:47.428522] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:29:56.705 [2024-07-15 19:51:47.428538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.706 [2024-07-15 19:51:47.428550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:29:56.706 [2024-07-15 19:51:47.428563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:56.706 [2024-07-15 19:51:47.428574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.706 [2024-07-15 19:51:47.459414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.706 [2024-07-15 19:51:47.459497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:29:56.706 [2024-07-15 19:51:47.459516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.799 ms 00:29:56.706 [2024-07-15 19:51:47.459535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.706 [2024-07-15 19:51:47.477038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.706 [2024-07-15 19:51:47.477118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:29:56.706 [2024-07-15 19:51:47.477136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:29:56.706 [2024-07-15 19:51:47.477149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.706 [2024-07-15 19:51:47.477445] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:29:57.269 [2024-07-15 19:51:47.950650] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:29:57.269 [2024-07-15 19:51:47.950847] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:29:57.895 [2024-07-15 19:51:48.402356] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:29:57.895 [2024-07-15 19:51:48.402461] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:57.895 [2024-07-15 19:51:48.402489] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:57.895 [2024-07-15 19:51:48.402505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.895 [2024-07-15 19:51:48.402518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:57.895 [2024-07-15 19:51:48.402534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 925.232 ms 00:29:57.895 [2024-07-15 19:51:48.402545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.895 [2024-07-15 19:51:48.402588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.895 [2024-07-15 19:51:48.402600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:57.895 [2024-07-15 19:51:48.402613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:57.895 [2024-07-15 19:51:48.402635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:57.895 [2024-07-15 19:51:48.418940] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:57.895 [2024-07-15 19:51:48.419177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.895 [2024-07-15 19:51:48.419202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:57.895 [2024-07-15 19:51:48.419217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.520 ms 00:29:57.895 [2024-07-15 19:51:48.419229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.895 [2024-07-15 19:51:48.419933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.895 [2024-07-15 19:51:48.419953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:29:57.895 [2024-07-15 19:51:48.419965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.568 ms 00:29:57.895 [2024-07-15 19:51:48.419975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.895 [2024-07-15 19:51:48.422389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.895 [2024-07-15 19:51:48.422425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:57.895 [2024-07-15 19:51:48.422440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.377 ms 00:29:57.895 [2024-07-15 19:51:48.422452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.895 [2024-07-15 19:51:48.422506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.895 [2024-07-15 19:51:48.422520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:29:57.895 [2024-07-15 19:51:48.422532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:57.895 [2024-07-15 19:51:48.422544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.895 [2024-07-15 19:51:48.422673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.895 [2024-07-15 19:51:48.422688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:57.895 [2024-07-15 19:51:48.422704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:29:57.895 [2024-07-15 19:51:48.422715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.895 [2024-07-15 19:51:48.422742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.895 [2024-07-15 19:51:48.422754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:57.895 [2024-07-15 19:51:48.422771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:57.895 [2024-07-15 19:51:48.422792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.895 [2024-07-15 19:51:48.422831] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:57.895 [2024-07-15 19:51:48.422845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.895 [2024-07-15 19:51:48.422856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:57.895 [2024-07-15 19:51:48.422868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:57.895 [2024-07-15 19:51:48.422883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.895 [2024-07-15 19:51:48.422939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:57.895 [2024-07-15 19:51:48.422952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:57.895 [2024-07-15 19:51:48.422963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:29:57.895 [2024-07-15 19:51:48.422974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.895 [2024-07-15 19:51:48.424173] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1357.790 ms, result 0 00:29:57.895 [2024-07-15 19:51:48.439847] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:57.895 [2024-07-15 19:51:48.455851] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:57.895 [2024-07-15 19:51:48.466542] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:57.895 Validate MD5 checksum, iteration 1 00:29:57.895 19:51:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:57.895 19:51:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:29:57.895 19:51:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:57.895 19:51:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:57.895 19:51:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:57.895 19:51:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:57.895 19:51:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:57.895 19:51:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:57.895 19:51:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:57.895 19:51:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:57.895 19:51:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:57.895 19:51:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:57.895 19:51:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:57.895 19:51:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:57.895 19:51:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:57.895 [2024-07-15 19:51:48.614687] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
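With recovery finished ("FTL startup" now completes in 1357.790 ms after restoring the P2L checkpoints and recovering the two open NV cache chunks), the test repeats the exact same two read-back windows; the digests it computes below (b9362679809ccd032b7577603bf3c4f0 and 61b08dabc6ea462e1aa1d6eccc178954) match the pre-shutdown values, which is the pass condition: no acknowledged write was lost across the unclean shutdown. The compare step, sketched against the illustrative sums[] array from the earlier snippet:

    # after recovery, each window must hash to the digest recorded before the kill
    sum=$(md5sum "$file" | cut -f1 -d ' ')
    if [[ "$sum" != "${sums[$i]}" ]]; then
        echo "checksum mismatch after dirty shutdown (window $i)" >&2
        exit 1
    fi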
00:29:57.895 [2024-07-15 19:51:48.615938] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86822 ] 00:29:58.152 [2024-07-15 19:51:48.795139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.409 [2024-07-15 19:51:49.051196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.223  Copying: 623/1024 [MB] (623 MBps) Copying: 1024/1024 [MB] (average 615 MBps) 00:30:04.223 00:30:04.223 19:51:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:04.223 19:51:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:06.126 19:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:06.126 Validate MD5 checksum, iteration 2 00:30:06.126 19:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b9362679809ccd032b7577603bf3c4f0 00:30:06.126 19:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b9362679809ccd032b7577603bf3c4f0 != \b\9\3\6\2\6\7\9\8\0\9\c\c\d\0\3\2\b\7\5\7\7\6\0\3\b\f\3\c\4\f\0 ]] 00:30:06.126 19:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:06.126 19:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:06.126 19:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:06.127 19:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:06.127 19:51:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:06.127 19:51:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:06.127 19:51:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:06.127 19:51:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:06.127 19:51:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:06.127 [2024-07-15 19:51:56.811754] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:30:06.127 [2024-07-15 19:51:56.811979] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86906 ] 00:30:06.385 [2024-07-15 19:51:56.983684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.643 [2024-07-15 19:51:57.248302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.003  Copying: 548/1024 [MB] (548 MBps) Copying: 1024/1024 [MB] (average 560 MBps) 00:30:11.003 00:30:11.003 19:52:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:11.003 19:52:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=61b08dabc6ea462e1aa1d6eccc178954 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 61b08dabc6ea462e1aa1d6eccc178954 != \6\1\b\0\8\d\a\b\c\6\e\a\4\6\2\e\1\a\a\1\d\6\e\c\c\c\1\7\8\9\5\4 ]] 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 86782 ]] 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 86782 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 86782 ']' 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 86782 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86782 00:30:13.530 killing process with pid 86782 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86782' 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 86782 00:30:13.530 19:52:03 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@972 -- # wait 86782 00:30:14.907 [2024-07-15 19:52:05.286303] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:14.907 [2024-07-15 19:52:05.309327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.907 [2024-07-15 19:52:05.309398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:14.907 [2024-07-15 19:52:05.309417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:14.907 [2024-07-15 19:52:05.309430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.907 [2024-07-15 19:52:05.309458] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:14.907 [2024-07-15 19:52:05.314129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.907 [2024-07-15 19:52:05.314182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:14.907 [2024-07-15 19:52:05.314199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.649 ms 00:30:14.907 [2024-07-15 19:52:05.314212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.907 [2024-07-15 19:52:05.314501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.907 [2024-07-15 19:52:05.314520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:14.907 [2024-07-15 19:52:05.314543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.255 ms 00:30:14.907 [2024-07-15 19:52:05.314555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.907 [2024-07-15 19:52:05.315717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.907 [2024-07-15 19:52:05.315759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:14.907 [2024-07-15 19:52:05.315775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.141 ms 00:30:14.907 [2024-07-15 19:52:05.315799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.907 [2024-07-15 19:52:05.316968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.907 [2024-07-15 19:52:05.317003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:14.907 [2024-07-15 19:52:05.317018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.128 ms 00:30:14.907 [2024-07-15 19:52:05.317038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.907 [2024-07-15 19:52:05.335858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.907 [2024-07-15 19:52:05.335942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:14.907 [2024-07-15 19:52:05.335962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.730 ms 00:30:14.907 [2024-07-15 19:52:05.335974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.907 [2024-07-15 19:52:05.345715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.907 [2024-07-15 19:52:05.345814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:14.907 [2024-07-15 19:52:05.345846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.662 ms 00:30:14.907 [2024-07-15 19:52:05.345858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.907 [2024-07-15 19:52:05.346001] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.907 [2024-07-15 19:52:05.346017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:14.907 [2024-07-15 19:52:05.346030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.072 ms 00:30:14.907 [2024-07-15 19:52:05.346042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.907 [2024-07-15 19:52:05.365372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.907 [2024-07-15 19:52:05.365452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:30:14.907 [2024-07-15 19:52:05.365469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.301 ms 00:30:14.907 [2024-07-15 19:52:05.365481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.907 [2024-07-15 19:52:05.385231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.907 [2024-07-15 19:52:05.385309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:30:14.907 [2024-07-15 19:52:05.385326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.671 ms 00:30:14.907 [2024-07-15 19:52:05.385338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.907 [2024-07-15 19:52:05.403870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.907 [2024-07-15 19:52:05.403948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:14.907 [2024-07-15 19:52:05.403965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.468 ms 00:30:14.907 [2024-07-15 19:52:05.403978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.907 [2024-07-15 19:52:05.423786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.907 [2024-07-15 19:52:05.423852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:14.907 [2024-07-15 19:52:05.423870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.669 ms 00:30:14.907 [2024-07-15 19:52:05.423882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.907 [2024-07-15 19:52:05.423932] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:14.907 [2024-07-15 19:52:05.423952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:14.907 [2024-07-15 19:52:05.423968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:14.907 [2024-07-15 19:52:05.423981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:14.907 [2024-07-15 19:52:05.423994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:14.907 [2024-07-15 19:52:05.424007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:14.907 [2024-07-15 19:52:05.424020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:14.907 [2024-07-15 19:52:05.424032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:14.907 [2024-07-15 19:52:05.424045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:14.907 [2024-07-15 19:52:05.424058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 
0 / 261120 wr_cnt: 0 state: free 00:30:14.907 [2024-07-15 19:52:05.424071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:14.907 [2024-07-15 19:52:05.424084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:14.907 [2024-07-15 19:52:05.424096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:14.907 [2024-07-15 19:52:05.424109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:14.907 [2024-07-15 19:52:05.424121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:14.907 [2024-07-15 19:52:05.424134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:14.907 [2024-07-15 19:52:05.424146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:14.907 [2024-07-15 19:52:05.424159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:14.907 [2024-07-15 19:52:05.424172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:14.907 [2024-07-15 19:52:05.424187] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:14.907 [2024-07-15 19:52:05.424219] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: ccc74b4e-8a2a-4a39-901a-daebb00cb136 00:30:14.907 [2024-07-15 19:52:05.424236] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:14.907 [2024-07-15 19:52:05.424253] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:30:14.907 [2024-07-15 19:52:05.424264] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:30:14.907 [2024-07-15 19:52:05.424276] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:30:14.907 [2024-07-15 19:52:05.424288] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:14.907 [2024-07-15 19:52:05.424300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:14.907 [2024-07-15 19:52:05.424311] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:14.907 [2024-07-15 19:52:05.424322] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:14.907 [2024-07-15 19:52:05.424333] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:14.907 [2024-07-15 19:52:05.424344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.907 [2024-07-15 19:52:05.424356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:14.907 [2024-07-15 19:52:05.424370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.414 ms 00:30:14.907 [2024-07-15 19:52:05.424382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.907 [2024-07-15 19:52:05.448351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.907 [2024-07-15 19:52:05.448415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:14.907 [2024-07-15 19:52:05.448432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.921 ms 00:30:14.907 [2024-07-15 19:52:05.448461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.907 [2024-07-15 19:52:05.449076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:30:14.907 [2024-07-15 19:52:05.449091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:14.907 [2024-07-15 19:52:05.449103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.571 ms 00:30:14.907 [2024-07-15 19:52:05.449115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.907 [2024-07-15 19:52:05.522520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:14.907 [2024-07-15 19:52:05.522588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:14.907 [2024-07-15 19:52:05.522605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:14.907 [2024-07-15 19:52:05.522617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.907 [2024-07-15 19:52:05.522682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:14.907 [2024-07-15 19:52:05.522697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:14.907 [2024-07-15 19:52:05.522709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:14.907 [2024-07-15 19:52:05.522721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.907 [2024-07-15 19:52:05.522857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:14.907 [2024-07-15 19:52:05.522874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:14.907 [2024-07-15 19:52:05.522886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:14.907 [2024-07-15 19:52:05.522897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.907 [2024-07-15 19:52:05.522919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:14.907 [2024-07-15 19:52:05.522932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:14.907 [2024-07-15 19:52:05.522944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:14.907 [2024-07-15 19:52:05.522955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.907 [2024-07-15 19:52:05.663277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:14.907 [2024-07-15 19:52:05.663350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:14.907 [2024-07-15 19:52:05.663368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:14.907 [2024-07-15 19:52:05.663380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.166 [2024-07-15 19:52:05.780875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:15.166 [2024-07-15 19:52:05.780952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:15.166 [2024-07-15 19:52:05.780969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:15.166 [2024-07-15 19:52:05.780981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.166 [2024-07-15 19:52:05.781091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:15.166 [2024-07-15 19:52:05.781104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:15.166 [2024-07-15 19:52:05.781116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:15.166 [2024-07-15 19:52:05.781127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.166 [2024-07-15 19:52:05.781193] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:15.166 [2024-07-15 19:52:05.781206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:15.166 [2024-07-15 19:52:05.781218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:15.166 [2024-07-15 19:52:05.781229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.166 [2024-07-15 19:52:05.781350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:15.166 [2024-07-15 19:52:05.781370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:15.166 [2024-07-15 19:52:05.781382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:15.166 [2024-07-15 19:52:05.781393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.166 [2024-07-15 19:52:05.781443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:15.166 [2024-07-15 19:52:05.781457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:15.166 [2024-07-15 19:52:05.781469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:15.166 [2024-07-15 19:52:05.781481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.166 [2024-07-15 19:52:05.781523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:15.166 [2024-07-15 19:52:05.781541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:15.166 [2024-07-15 19:52:05.781552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:15.166 [2024-07-15 19:52:05.781564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.166 [2024-07-15 19:52:05.781611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:15.166 [2024-07-15 19:52:05.781624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:15.166 [2024-07-15 19:52:05.781636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:15.166 [2024-07-15 19:52:05.781648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.166 [2024-07-15 19:52:05.781783] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 472.422 ms, result 0 00:30:16.539 19:52:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:16.539 19:52:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:16.539 19:52:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:30:16.539 19:52:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:30:16.540 19:52:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:30:16.540 19:52:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:16.540 Remove shared memory files 00:30:16.540 19:52:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:30:16.540 19:52:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:16.540 19:52:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:16.540 19:52:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:16.540 19:52:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid86540 00:30:16.540 19:52:07 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:16.540 19:52:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:16.540 ************************************ 00:30:16.540 END TEST ftl_upgrade_shutdown 00:30:16.540 ************************************ 00:30:16.540 00:30:16.540 real 1m36.955s 00:30:16.540 user 2m17.320s 00:30:16.540 sys 0m23.506s 00:30:16.540 19:52:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:16.540 19:52:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:16.798 Process with pid 79833 is not found 00:30:16.798 19:52:07 ftl -- common/autotest_common.sh@1142 -- # return 0 00:30:16.798 19:52:07 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:30:16.798 19:52:07 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:30:16.798 19:52:07 ftl -- ftl/ftl.sh@14 -- # killprocess 79833 00:30:16.798 19:52:07 ftl -- common/autotest_common.sh@948 -- # '[' -z 79833 ']' 00:30:16.798 19:52:07 ftl -- common/autotest_common.sh@952 -- # kill -0 79833 00:30:16.798 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (79833) - No such process 00:30:16.798 19:52:07 ftl -- common/autotest_common.sh@975 -- # echo 'Process with pid 79833 is not found' 00:30:16.798 19:52:07 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:30:16.798 19:52:07 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=87050 00:30:16.798 19:52:07 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:16.798 19:52:07 ftl -- ftl/ftl.sh@20 -- # waitforlisten 87050 00:30:16.798 19:52:07 ftl -- common/autotest_common.sh@829 -- # '[' -z 87050 ']' 00:30:16.798 19:52:07 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.798 19:52:07 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:16.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:16.798 19:52:07 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.798 19:52:07 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:16.798 19:52:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:16.798 [2024-07-15 19:52:07.443447] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
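The killprocess calls traced above and below (pids 86782, 79833, 87050) follow the same autotest_common.sh pattern: probe the pid with kill -0, report when the process has already gone away, otherwise send SIGTERM and wait for it. A condensed, hypothetical rendering of that pattern (the real helper also carries argument validation and sudo handling that the trace only hints at):

    killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      if ! kill -0 "$pid" 2>/dev/null; then      # already gone?
        echo "Process with pid $pid is not found"
        return 0
      fi
      echo "killing process with pid $pid"
      kill "$pid"                                # SIGTERM; no escalation is needed in the runs above
      wait "$pid" 2>/dev/null || true            # reap it when it is a child of this shell
    }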
00:30:16.798 [2024-07-15 19:52:07.443611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87050 ] 00:30:17.057 [2024-07-15 19:52:07.615056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.314 [2024-07-15 19:52:07.900364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.249 19:52:08 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:18.249 19:52:08 ftl -- common/autotest_common.sh@862 -- # return 0 00:30:18.249 19:52:08 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:18.507 nvme0n1 00:30:18.507 19:52:09 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:30:18.507 19:52:09 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:18.507 19:52:09 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:18.765 19:52:09 ftl -- ftl/common.sh@28 -- # stores=16421c72-177f-4b66-a93e-559d5549e83b 00:30:18.765 19:52:09 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:30:18.765 19:52:09 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 16421c72-177f-4b66-a93e-559d5549e83b 00:30:19.022 19:52:09 ftl -- ftl/ftl.sh@23 -- # killprocess 87050 00:30:19.022 19:52:09 ftl -- common/autotest_common.sh@948 -- # '[' -z 87050 ']' 00:30:19.022 19:52:09 ftl -- common/autotest_common.sh@952 -- # kill -0 87050 00:30:19.022 19:52:09 ftl -- common/autotest_common.sh@953 -- # uname 00:30:19.022 19:52:09 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:19.022 19:52:09 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87050 00:30:19.022 killing process with pid 87050 00:30:19.022 19:52:09 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:19.022 19:52:09 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:19.022 19:52:09 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87050' 00:30:19.022 19:52:09 ftl -- common/autotest_common.sh@967 -- # kill 87050 00:30:19.022 19:52:09 ftl -- common/autotest_common.sh@972 -- # wait 87050 00:30:21.596 19:52:12 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:21.855 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:21.855 Waiting for block devices as requested 00:30:22.113 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:22.113 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:22.113 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:30:22.371 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:30:27.631 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:30:27.631 Remove shared memory files 00:30:27.631 19:52:18 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:30:27.631 19:52:18 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:27.631 19:52:18 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:30:27.631 19:52:18 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:30:27.631 19:52:18 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:30:27.631 19:52:18 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:27.631 19:52:18 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:30:27.631 
************************************ 00:30:27.631 END TEST ftl 00:30:27.631 ************************************ 00:30:27.631 00:30:27.631 real 10m45.858s 00:30:27.631 user 13m31.147s 00:30:27.631 sys 1m28.518s 00:30:27.631 19:52:18 ftl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:27.631 19:52:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:27.631 19:52:18 -- common/autotest_common.sh@1142 -- # return 0 00:30:27.631 19:52:18 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:30:27.631 19:52:18 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:30:27.631 19:52:18 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:30:27.631 19:52:18 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:30:27.631 19:52:18 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:30:27.631 19:52:18 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:30:27.631 19:52:18 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:30:27.631 19:52:18 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:30:27.631 19:52:18 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:30:27.631 19:52:18 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:30:27.631 19:52:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:27.631 19:52:18 -- common/autotest_common.sh@10 -- # set +x 00:30:27.631 19:52:18 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:30:27.632 19:52:18 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:30:27.632 19:52:18 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:30:27.632 19:52:18 -- common/autotest_common.sh@10 -- # set +x 00:30:29.564 INFO: APP EXITING 00:30:29.564 INFO: killing all VMs 00:30:29.564 INFO: killing vhost app 00:30:29.564 INFO: EXIT DONE 00:30:29.822 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:30.081 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:30:30.081 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:30:30.081 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:30:30.338 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:30:30.596 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:31.164 Cleaning 00:30:31.164 Removing: /var/run/dpdk/spdk0/config 00:30:31.164 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:31.164 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:31.164 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:31.164 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:31.164 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:31.164 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:31.164 Removing: /var/run/dpdk/spdk0 00:30:31.164 Removing: /var/run/dpdk/spdk_pid61913 00:30:31.164 Removing: /var/run/dpdk/spdk_pid62157 00:30:31.164 Removing: /var/run/dpdk/spdk_pid62388 00:30:31.164 Removing: /var/run/dpdk/spdk_pid62493 00:30:31.164 Removing: /var/run/dpdk/spdk_pid62560 00:30:31.164 Removing: /var/run/dpdk/spdk_pid62693 00:30:31.164 Removing: /var/run/dpdk/spdk_pid62717 00:30:31.164 Removing: /var/run/dpdk/spdk_pid62913 00:30:31.164 Removing: /var/run/dpdk/spdk_pid63024 00:30:31.164 Removing: /var/run/dpdk/spdk_pid63129 00:30:31.164 Removing: /var/run/dpdk/spdk_pid63250 00:30:31.164 Removing: /var/run/dpdk/spdk_pid63361 00:30:31.164 Removing: /var/run/dpdk/spdk_pid63401 00:30:31.164 Removing: /var/run/dpdk/spdk_pid63443 00:30:31.164 Removing: /var/run/dpdk/spdk_pid63513 00:30:31.164 Removing: /var/run/dpdk/spdk_pid63624 00:30:31.164 Removing: 
/var/run/dpdk/spdk_pid64088 00:30:31.164 Removing: /var/run/dpdk/spdk_pid64174 00:30:31.164 Removing: /var/run/dpdk/spdk_pid64259 00:30:31.164 Removing: /var/run/dpdk/spdk_pid64275 00:30:31.164 Removing: /var/run/dpdk/spdk_pid64440 00:30:31.164 Removing: /var/run/dpdk/spdk_pid64461 00:30:31.165 Removing: /var/run/dpdk/spdk_pid64630 00:30:31.165 Removing: /var/run/dpdk/spdk_pid64653 00:30:31.165 Removing: /var/run/dpdk/spdk_pid64728 00:30:31.165 Removing: /var/run/dpdk/spdk_pid64746 00:30:31.165 Removing: /var/run/dpdk/spdk_pid64821 00:30:31.165 Removing: /var/run/dpdk/spdk_pid64850 00:30:31.165 Removing: /var/run/dpdk/spdk_pid65043 00:30:31.165 Removing: /var/run/dpdk/spdk_pid65085 00:30:31.165 Removing: /var/run/dpdk/spdk_pid65170 00:30:31.165 Removing: /var/run/dpdk/spdk_pid65258 00:30:31.165 Removing: /var/run/dpdk/spdk_pid65295 00:30:31.165 Removing: /var/run/dpdk/spdk_pid65378 00:30:31.165 Removing: /var/run/dpdk/spdk_pid65425 00:30:31.165 Removing: /var/run/dpdk/spdk_pid65477 00:30:31.165 Removing: /var/run/dpdk/spdk_pid65523 00:30:31.165 Removing: /var/run/dpdk/spdk_pid65574 00:30:31.165 Removing: /var/run/dpdk/spdk_pid65622 00:30:31.165 Removing: /var/run/dpdk/spdk_pid65674 00:30:31.165 Removing: /var/run/dpdk/spdk_pid65726 00:30:31.165 Removing: /var/run/dpdk/spdk_pid65778 00:30:31.165 Removing: /var/run/dpdk/spdk_pid65830 00:30:31.165 Removing: /var/run/dpdk/spdk_pid65881 00:30:31.165 Removing: /var/run/dpdk/spdk_pid65929 00:30:31.165 Removing: /var/run/dpdk/spdk_pid65981 00:30:31.165 Removing: /var/run/dpdk/spdk_pid66033 00:30:31.165 Removing: /var/run/dpdk/spdk_pid66085 00:30:31.165 Removing: /var/run/dpdk/spdk_pid66137 00:30:31.165 Removing: /var/run/dpdk/spdk_pid66189 00:30:31.165 Removing: /var/run/dpdk/spdk_pid66243 00:30:31.165 Removing: /var/run/dpdk/spdk_pid66294 00:30:31.165 Removing: /var/run/dpdk/spdk_pid66346 00:30:31.165 Removing: /var/run/dpdk/spdk_pid66399 00:30:31.165 Removing: /var/run/dpdk/spdk_pid66492 00:30:31.165 Removing: /var/run/dpdk/spdk_pid66614 00:30:31.165 Removing: /var/run/dpdk/spdk_pid66796 00:30:31.165 Removing: /var/run/dpdk/spdk_pid66904 00:30:31.165 Removing: /var/run/dpdk/spdk_pid66957 00:30:31.165 Removing: /var/run/dpdk/spdk_pid67428 00:30:31.165 Removing: /var/run/dpdk/spdk_pid67537 00:30:31.165 Removing: /var/run/dpdk/spdk_pid67663 00:30:31.165 Removing: /var/run/dpdk/spdk_pid67723 00:30:31.165 Removing: /var/run/dpdk/spdk_pid67757 00:30:31.424 Removing: /var/run/dpdk/spdk_pid67834 00:30:31.424 Removing: /var/run/dpdk/spdk_pid68474 00:30:31.424 Removing: /var/run/dpdk/spdk_pid68522 00:30:31.424 Removing: /var/run/dpdk/spdk_pid69042 00:30:31.424 Removing: /var/run/dpdk/spdk_pid69150 00:30:31.424 Removing: /var/run/dpdk/spdk_pid69270 00:30:31.424 Removing: /var/run/dpdk/spdk_pid69335 00:30:31.424 Removing: /var/run/dpdk/spdk_pid69366 00:30:31.424 Removing: /var/run/dpdk/spdk_pid69402 00:30:31.424 Removing: /var/run/dpdk/spdk_pid71286 00:30:31.424 Removing: /var/run/dpdk/spdk_pid71435 00:30:31.424 Removing: /var/run/dpdk/spdk_pid71445 00:30:31.424 Removing: /var/run/dpdk/spdk_pid71457 00:30:31.424 Removing: /var/run/dpdk/spdk_pid71497 00:30:31.424 Removing: /var/run/dpdk/spdk_pid71501 00:30:31.424 Removing: /var/run/dpdk/spdk_pid71513 00:30:31.424 Removing: /var/run/dpdk/spdk_pid71558 00:30:31.424 Removing: /var/run/dpdk/spdk_pid71562 00:30:31.424 Removing: /var/run/dpdk/spdk_pid71579 00:30:31.424 Removing: /var/run/dpdk/spdk_pid71624 00:30:31.424 Removing: /var/run/dpdk/spdk_pid71628 00:30:31.424 Removing: /var/run/dpdk/spdk_pid71640 
00:30:31.424 Removing: /var/run/dpdk/spdk_pid72988 00:30:31.424 Removing: /var/run/dpdk/spdk_pid73095 00:30:31.424 Removing: /var/run/dpdk/spdk_pid74509 00:30:31.424 Removing: /var/run/dpdk/spdk_pid75879 00:30:31.424 Removing: /var/run/dpdk/spdk_pid76006 00:30:31.424 Removing: /var/run/dpdk/spdk_pid76122 00:30:31.424 Removing: /var/run/dpdk/spdk_pid76245 00:30:31.424 Removing: /var/run/dpdk/spdk_pid76392 00:30:31.424 Removing: /var/run/dpdk/spdk_pid76473 00:30:31.424 Removing: /var/run/dpdk/spdk_pid76613 00:30:31.424 Removing: /var/run/dpdk/spdk_pid77001 00:30:31.424 Removing: /var/run/dpdk/spdk_pid77043 00:30:31.424 Removing: /var/run/dpdk/spdk_pid77526 00:30:31.424 Removing: /var/run/dpdk/spdk_pid77711 00:30:31.424 Removing: /var/run/dpdk/spdk_pid77821 00:30:31.424 Removing: /var/run/dpdk/spdk_pid77943 00:30:31.424 Removing: /var/run/dpdk/spdk_pid78004 00:30:31.424 Removing: /var/run/dpdk/spdk_pid78035 00:30:31.424 Removing: /var/run/dpdk/spdk_pid78328 00:30:31.424 Removing: /var/run/dpdk/spdk_pid78399 00:30:31.424 Removing: /var/run/dpdk/spdk_pid78492 00:30:31.424 Removing: /var/run/dpdk/spdk_pid78894 00:30:31.424 Removing: /var/run/dpdk/spdk_pid79045 00:30:31.424 Removing: /var/run/dpdk/spdk_pid79833 00:30:31.424 Removing: /var/run/dpdk/spdk_pid79985 00:30:31.424 Removing: /var/run/dpdk/spdk_pid80196 00:30:31.424 Removing: /var/run/dpdk/spdk_pid80300 00:30:31.424 Removing: /var/run/dpdk/spdk_pid80637 00:30:31.424 Removing: /var/run/dpdk/spdk_pid80898 00:30:31.424 Removing: /var/run/dpdk/spdk_pid81261 00:30:31.424 Removing: /var/run/dpdk/spdk_pid81469 00:30:31.424 Removing: /var/run/dpdk/spdk_pid81598 00:30:31.424 Removing: /var/run/dpdk/spdk_pid81663 00:30:31.424 Removing: /var/run/dpdk/spdk_pid81801 00:30:31.424 Removing: /var/run/dpdk/spdk_pid81843 00:30:31.424 Removing: /var/run/dpdk/spdk_pid81920 00:30:31.424 Removing: /var/run/dpdk/spdk_pid82109 00:30:31.424 Removing: /var/run/dpdk/spdk_pid82347 00:30:31.424 Removing: /var/run/dpdk/spdk_pid82709 00:30:31.424 Removing: /var/run/dpdk/spdk_pid83086 00:30:31.424 Removing: /var/run/dpdk/spdk_pid83471 00:30:31.424 Removing: /var/run/dpdk/spdk_pid83904 00:30:31.424 Removing: /var/run/dpdk/spdk_pid84064 00:30:31.424 Removing: /var/run/dpdk/spdk_pid84151 00:30:31.424 Removing: /var/run/dpdk/spdk_pid84732 00:30:31.424 Removing: /var/run/dpdk/spdk_pid84812 00:30:31.424 Removing: /var/run/dpdk/spdk_pid85194 00:30:31.424 Removing: /var/run/dpdk/spdk_pid85528 00:30:31.424 Removing: /var/run/dpdk/spdk_pid85954 00:30:31.424 Removing: /var/run/dpdk/spdk_pid86076 00:30:31.424 Removing: /var/run/dpdk/spdk_pid86130 00:30:31.424 Removing: /var/run/dpdk/spdk_pid86201 00:30:31.424 Removing: /var/run/dpdk/spdk_pid86262 00:30:31.424 Removing: /var/run/dpdk/spdk_pid86331 00:30:31.424 Removing: /var/run/dpdk/spdk_pid86540 00:30:31.424 Removing: /var/run/dpdk/spdk_pid86610 00:30:31.424 Removing: /var/run/dpdk/spdk_pid86698 00:30:31.683 Removing: /var/run/dpdk/spdk_pid86782 00:30:31.683 Removing: /var/run/dpdk/spdk_pid86822 00:30:31.683 Removing: /var/run/dpdk/spdk_pid86906 00:30:31.683 Removing: /var/run/dpdk/spdk_pid87050 00:30:31.683 Clean 00:30:31.683 19:52:22 -- common/autotest_common.sh@1451 -- # return 0 00:30:31.683 19:52:22 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:30:31.683 19:52:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:31.683 19:52:22 -- common/autotest_common.sh@10 -- # set +x 00:30:31.683 19:52:22 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:30:31.683 19:52:22 -- common/autotest_common.sh@728 -- # 
xtrace_disable 00:30:31.683 19:52:22 -- common/autotest_common.sh@10 -- # set +x 00:30:31.683 19:52:22 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:31.683 19:52:22 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:30:31.683 19:52:22 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:30:31.683 19:52:22 -- spdk/autotest.sh@391 -- # hash lcov 00:30:31.683 19:52:22 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:31.683 19:52:22 -- spdk/autotest.sh@393 -- # hostname 00:30:31.683 19:52:22 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:30:31.942 geninfo: WARNING: invalid characters removed from testname! 00:30:58.481 19:52:47 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:00.386 19:52:50 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:02.290 19:52:53 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:04.824 19:52:55 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:07.429 19:52:57 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:09.326 19:52:59 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:11.293 19:53:01 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:11.293 19:53:02 -- common/autobuild_common.sh@15 -- $ source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:11.293 19:53:02 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:11.293 19:53:02 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:11.293 19:53:02 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:11.293 19:53:02 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.293 19:53:02 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.293 19:53:02 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.293 19:53:02 -- paths/export.sh@5 -- $ export PATH 00:31:11.293 19:53:02 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.293 19:53:02 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:31:11.293 19:53:02 -- common/autobuild_common.sh@444 -- $ date +%s 00:31:11.293 19:53:02 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721073182.XXXXXX 00:31:11.293 19:53:02 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721073182.19epz4 00:31:11.293 19:53:02 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:31:11.293 19:53:02 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:31:11.293 19:53:02 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:31:11.293 19:53:02 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:31:11.293 19:53:02 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:31:11.293 19:53:02 -- common/autobuild_common.sh@460 -- $ get_config_params 00:31:11.293 19:53:02 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:31:11.293 19:53:02 -- common/autotest_common.sh@10 -- $ set +x 00:31:11.293 19:53:02 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan 
--enable-asan --enable-coverage --with-ublk --with-xnvme' 00:31:11.293 19:53:02 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:31:11.293 19:53:02 -- pm/common@17 -- $ local monitor 00:31:11.293 19:53:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:11.293 19:53:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:11.293 19:53:02 -- pm/common@25 -- $ sleep 1 00:31:11.293 19:53:02 -- pm/common@21 -- $ date +%s 00:31:11.293 19:53:02 -- pm/common@21 -- $ date +%s 00:31:11.293 19:53:02 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721073182 00:31:11.552 19:53:02 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721073182 00:31:11.552 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721073182_collect-vmstat.pm.log 00:31:11.552 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721073182_collect-cpu-load.pm.log 00:31:12.487 19:53:03 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:31:12.487 19:53:03 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:31:12.487 19:53:03 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:31:12.487 19:53:03 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:31:12.487 19:53:03 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:31:12.487 19:53:03 -- spdk/autopackage.sh@19 -- $ timing_finish 00:31:12.487 19:53:03 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:12.487 19:53:03 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:31:12.487 19:53:03 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:12.487 19:53:03 -- spdk/autopackage.sh@20 -- $ exit 0 00:31:12.487 19:53:03 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:31:12.487 19:53:03 -- pm/common@29 -- $ signal_monitor_resources TERM 00:31:12.487 19:53:03 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:31:12.488 19:53:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:12.488 19:53:03 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:31:12.488 19:53:03 -- pm/common@44 -- $ pid=88768 00:31:12.488 19:53:03 -- pm/common@50 -- $ kill -TERM 88768 00:31:12.488 19:53:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:12.488 19:53:03 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:31:12.488 19:53:03 -- pm/common@44 -- $ pid=88769 00:31:12.488 19:53:03 -- pm/common@50 -- $ kill -TERM 88769 00:31:12.488 + [[ -n 5201 ]] 00:31:12.488 + sudo kill 5201 00:31:12.498 [Pipeline] } 00:31:12.521 [Pipeline] // timeout 00:31:12.527 [Pipeline] } 00:31:12.546 [Pipeline] // stage 00:31:12.554 [Pipeline] } 00:31:12.573 [Pipeline] // catchError 00:31:12.586 [Pipeline] stage 00:31:12.588 [Pipeline] { (Stop VM) 00:31:12.602 [Pipeline] sh 00:31:12.876 + vagrant halt 00:31:16.162 ==> default: Halting domain... 00:31:22.727 [Pipeline] sh 00:31:22.999 + vagrant destroy -f 00:31:26.283 ==> default: Removing domain... 
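The lcov sequence traced earlier in this epilogue captures coverage for the run, merges it with the baseline, and then strips out-of-tree sources with repeated -r filters before the intermediate .info files are removed. A simplified sketch of those steps, with the long --rc option set folded into one variable and abbreviated output paths (illustrative only, not the exact autotest.sh wiring):

    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q'
    OUT=../output
    lcov $LCOV_OPTS -c -d . -t "$(hostname)" -o "$OUT/cov_test.info"            # capture this run
    lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" \
         -o "$OUT/cov_total.info"                                               # merge with the baseline
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
               '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"  # drop external code
    done
    rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"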
00:31:26.857 [Pipeline] sh 00:31:27.146 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:31:27.154 [Pipeline] } 00:31:27.168 [Pipeline] // stage 00:31:27.173 [Pipeline] } 00:31:27.188 [Pipeline] // dir 00:31:27.194 [Pipeline] } 00:31:27.211 [Pipeline] // wrap 00:31:27.217 [Pipeline] } 00:31:27.235 [Pipeline] // catchError 00:31:27.245 [Pipeline] stage 00:31:27.248 [Pipeline] { (Epilogue) 00:31:27.264 [Pipeline] sh 00:31:27.541 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:31:34.114 [Pipeline] catchError 00:31:34.116 [Pipeline] { 00:31:34.131 [Pipeline] sh 00:31:34.415 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:31:34.673 Artifacts sizes are good 00:31:34.683 [Pipeline] } 00:31:34.703 [Pipeline] // catchError 00:31:34.715 [Pipeline] archiveArtifacts 00:31:34.722 Archiving artifacts 00:31:34.867 [Pipeline] cleanWs 00:31:34.879 [WS-CLEANUP] Deleting project workspace... 00:31:34.879 [WS-CLEANUP] Deferred wipeout is used... 00:31:34.885 [WS-CLEANUP] done 00:31:34.887 [Pipeline] } 00:31:34.908 [Pipeline] // stage 00:31:34.914 [Pipeline] } 00:31:34.932 [Pipeline] // node 00:31:34.938 [Pipeline] End of Pipeline 00:31:34.981 Finished: SUCCESS